Category: Tutorials

  • How to Cache POST Requests in Nginx

    How to Cache POST Requests in Nginx

    Caching can substantially reduce load times and bandwidth usage, thereby enhancing the overall user experience. It allows the application to store the results of expensive database queries or API calls, enabling instant serving of cached data instead of re-computing or fetching it from the source each time. In this tutorial, we will explore why and how to cache POST requests in Nginx.

    There are only two hard things in Computer Science: cache invalidation and naming things.

    — Phil Karlton

    Caching POST requests: potential hazards

By default, POST requests are not cached. Their (usually) non-idempotent (or “non-safe”) nature can lead to undesired and unexpected consequences when cached. Sensitive data these requests may contain, like passwords, risks exposure to other users and potential threats when cached. Additionally, POST requests often carry large payloads, such as file uploads, which can consume significant memory or storage when stored. These potential hazards are why caching POST requests is generally not advised.

    Although it may not be a good idea to cache POST requests, RFC 2616 allows POST methods to be cached provided the response includes appropriate Cache-Control or Expires header fields.

    The question: why would you want to cache a POST request?

The decision to cache a POST request typically depends on the impact of the POST request on the server. If the POST request can trigger side effects on the server beyond just resource creation, it should not be cached. However, a POST request can also be idempotent/safe in nature; in such cases, caching is considered safe.

    Why and how to cache POST requests

    Recently, while working on a project, I found myself designing a simple fallback mechanism to ensure responses to requests even when the backend was offline. The request itself had no side effects, though the returned data might change infrequently. Thus, using caching made sense.

    I did not want to use Redis for two reasons:

    1. I wanted to keep the approach simple, without involving ‘too many’ moving parts.
    2. Redis does not automatically serve stale cache data when the cache expires or is evicted (invalidate-on-expire).

    As we were using Nginx, I decided to go ahead with this approach (see figure).

The frontend makes a POST request to the server, which has Nginx set up as a reverse proxy. While the services are up and running, Nginx caches their responses for a certain time; if the services go down, Nginx serves the (possibly stale) cached response from its store.

    http {
        ...
        # Define cache zone
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my-cache:20m max_size=1g inactive=3h use_temp_path=off;
        ...
    }
    
    location /cache/me {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://service:3000;
    
        # Use cache zone defined in http
        proxy_cache my-cache;
        proxy_cache_lock on;
        
        # Cache for 3h if the status code is 200/201/302
        proxy_cache_valid 200 201 302 3h;
        
    # Serve stale cached responses
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_methods POST;
    
        # ! This is important
        proxy_cache_key "$request_uri|$request_body";
    
        proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
    
        # Add header to the response
        add_header X-Cached $upstream_cache_status;
    }
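To verify that caching works, you can send the same POST request twice and inspect the X-Cached header added above: the first response should report MISS and the second HIT. A minimal check with curl (assuming the proxy listens on localhost and /cache/me accepts a JSON body):

curl -s -o /dev/null -D - -X POST http://localhost/cache/me \
     -H "Content-Type: application/json" \
     -d '{"user_id": 42}' | grep -i x-cached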

    Things to consider

In proxy_cache_key "$request_uri|$request_body", we are using the request URI as well as the body as the identifier for the cached response. This was important in my case because the request (payload) and response contained sensitive information, and we needed to ensure that the response is cached on a per-user basis. This, however, comes with a few implications:

    1. Saving the request body may cause a downgrade in performance (if the request body is large).
    2. Increased memory/storage usage.
3. Even a slightly different request body will cause Nginx to cache a new response. This may lead to redundancy and data mismatches.

    Conclusion

    Caching POST requests in Nginx may offer a viable solution for enhancing application performance. Despite the inherent risks associated with caching such requests, careful implementation can make this approach both safe and effective. This tutorial discusses how we can implement POST request caching wisely.

    Want to know how we can monitor server logs like a pro, using Grafana Loki?

    Suggested Readings

    1. Idempotent and Safe APIs
    2. Nginx Proxy Module
    3. Caching POST Requests with Varnish
  • Protected Routes in Next.js

    Protected Routes in Next.js

If you are building a SaaS website with awesome features or a simple website with minimal user functionality, you know Authentication and Authorization are crucial (difference between authentication and authorization). Protected Routes in Next.js help us ensure that unauthenticated users cannot see routes/pages intended for logged-in (authenticated) users. There are a few approaches to implement Protected Routes in Next.js, i.e., to enforce authentication for a page/route.

    But, first of all – why do we love Next.js? Next.js is arguably the most popular and go-to React framework. It packs some cool stuff including file-based routing, incremental static regeneration, and internationalization (i18n). With Next.js 13, we have got even more power – layouts and Turbopack!

You might be wondering – why bother protecting routes? We are building a SaaS product with a Next.js frontend and Nest.js backend. We have implemented authentication in the backend, but we also need to ensure that forced browsing is prevented and the User Experience is enriched. The actual authentication logic should reside in our backend, and all API calls must be appropriately authenticated: in our app, whenever there is an unauthenticated request, the backend returns 401 Unauthorized. An ACL is also in place, so whenever a user requests a resource they do not have access to, the backend returns 403 Forbidden.

Now, let’s create a route protection flow in Next.js:

1. If a user requests a protected route (something that requires authentication), we redirect them to the login page.
2. We should not prevent access if a route is public (supposed to be viewed regardless of the user’s authentication state), like a login page.

At the end of the day, the goals are simple: safety and security.

— Jodi Rell

    Using RouteGuard

The concept of a RouteGuard is simple. It is a wrapper component that checks whether the user has access to the requested page on every route change. To track the access, we use a single piece of state: authorized. If authorized is true, the user may see the page; otherwise, the user is redirected to the login page. To update the state, we have a function authCheck() which prevents access (sets authorized to false) if the user does not have access and the page is not public (e.g. landing page, login page, sign-up page).

    import { Flex, Spinner } from '@chakra-ui/react';
    import { useRouter } from 'next/router';
    import publicPaths from '../data/publicPaths';
    import { useAppDispatch, useAppSelector } from '../hooks/storeHooks';
    import { setRedirectLink } from '../redux/AuthSlice';
    import {
      JSXElementConstructor,
      ReactElement,
      useEffect,
      useState,
    } from 'react';
    
    const RouteGuard = (props: {
      children: ReactElement<unknown, string | JSXElementConstructor<unknown>>;
    }) => {
      const { children } = props;
    
      const router = useRouter();
      const [authorized, setAuthorized] = useState(false);
      const user = useAppSelector((state) => state.auth);
    
      const dispatch = useAppDispatch();
    
      useEffect(() => {
        const authCheck = () => {
          if (
            !user.isLoggedIn &&
            !publicPaths.includes(router.asPath.split('?')[0])
          ) {
            setAuthorized(false);
            dispatch(setRedirectLink({ goto: router.asPath }));
            void router.push({
              pathname: '/login',
            });
          } else {
            setAuthorized(true);
          }
        };
    
        authCheck();
    
        const preventAccess = () => setAuthorized(false);
    
        router.events.on('routeChangeStart', preventAccess);
        router.events.on('routeChangeComplete', authCheck);
    
        return () => {
          router.events.off('routeChangeStart', preventAccess);
          router.events.off('routeChangeComplete', authCheck);
        };
      }, [dispatch, router, router.events, user]);
    
      return authorized ? (
        children
      ) : (
        <Flex h="100vh" w="100vw" justifyContent="center" alignItems="center">
          <Spinner size="xl" />
        </Flex>
      );
    };
    
    export default RouteGuard;

    Note: we are using Redux to store the user’s data; authentication is out of the scope of this blog post.
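For completeness, here is a minimal sketch of how the guard might be wired up. The file paths and the shape of publicPaths are assumptions based on the imports above, and the Redux/Chakra providers are omitted for brevity:

// data/publicPaths.ts (assumed shape)
const publicPaths = ['/', '/login', '/signup'];
export default publicPaths;

// pages/_app.tsx (sketch)
import type { AppProps } from 'next/app';
import RouteGuard from '../components/RouteGuard';

const App = ({ Component, pageProps }: AppProps) => (
  // RouteGuard decides whether to render the page or the loading spinner.
  <RouteGuard>
    <Component {...pageProps} />
  </RouteGuard>
);

export default App;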

    Implementing the Middleware

In a scenario where the user’s session expires while they are on a protected page, they will not be able to fetch newer resources (or perform any actions, for that matter). That, once again, is really bad UX. We cannot expect the user to refresh, so we need a way to let them know that their session is no longer valid.

To implement this, we will use a middleware – in our case, a Redux middleware (the snippet below uses Redux Toolkit), rather than Next.js’ request Middleware. In a few words, a middleware sits between an action being dispatched and it reaching the reducers; it lets you run code for every action flowing through the app, so you can react to it – for example by modifying state, redirecting, or logging the user out.

After session expiration, whenever the user makes a request, it will result in 401 Unauthorized. We have implemented a middleware that listens to the response of each request made from the frontend; if the request results in 401 Unauthorized, we dispatch a logout action, i.e. log out the user, and the RouteGuard then redirects them to the login page.

    import {
      MiddlewareAPI,
      isRejectedWithValue,
      Middleware,
    } from '@reduxjs/toolkit';
    import { logout } from '../redux/AuthSlice';
    import { store } from '../redux/store';
    
    interface ActionType {
      type: string;
      payload: { status: number };
      meta: {};
      error: {};
    }
    
    const unauthenticatedInterceptor: Middleware =
      (_api: MiddlewareAPI) =>
      (next: (action: ActionType) => unknown) =>
      (action: ActionType) => {
        if (isRejectedWithValue(action)) {
          if (action.payload.status === 401 || action.payload.status === 403) {
            console.error('MIDDLEWARE: Unauthorized/Unauthenticated [Invalid token]');
            store.dispatch(logout());
          }
        }
    
        return next(action);
      };
    
    export default unauthenticatedInterceptor;
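For this interceptor to run, it has to be registered with the Redux store. A minimal sketch, assuming an RTK Query API slice named api and the auth slice from above (names and paths are illustrative):

// redux/store.ts (sketch)
import { configureStore } from '@reduxjs/toolkit';
import authReducer from './AuthSlice';
import { api } from './api'; // hypothetical RTK Query API slice
import unauthenticatedInterceptor from '../middleware/unauthenticatedInterceptor';

export const store = configureStore({
  reducer: {
    auth: authReducer,
    [api.reducerPath]: api.reducer,
  },
  // Append our interceptor after the default middleware and the API middleware.
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().concat(api.middleware, unauthenticatedInterceptor),
});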

    Suggested Readings

  • Getting Started with DatoCMS – Creating a simple blog

    Getting Started with DatoCMS – Creating a simple blog

DatoCMS is a relative newcomer in the CMS industry. There are plenty of Content Management Systems (CMS) out there. A CMS is a powerful piece of software that helps users create, manage, and modify content on a website without requiring special technical expertise or writing code. WordPress is the most popular CMS, with over 43% of the market share; other examples are Drupal and Joomla.

DatoCMS, rolled out for public use in 2019, is a cloud-based headless CMS for mobile apps, static websites, and server-side applications. The concept of a headless CMS is relatively new. It was born out of the modern need to serve content across multiple channels, like web apps, mobile apps, IoT devices, and wearables. A headless CMS ships without a presentation layer; since the content is decoupled from how it is displayed, the developer has the flexibility of serving it across a wide range of channels.

    Why DatoCMS?

    DatoCMS is API-first. Every software powered by DatoCMS makes use of two APIs to work with the content – Content Delivery API, and Content Management API. To perform operations on the content programmatically, we make use of the Content Management API. To retrieve the content for displaying purposes, we make use of the Content Delivery API.
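As a small illustration of the API-first approach, the Content Delivery API is a GraphQL endpoint that any HTTP client can query. A minimal sketch, assuming a read-only API token and a Post model exposing a title field (as in the starter project used below):

curl -s https://graphql.datocms.com/ \
  -H "Authorization: Bearer YOUR_READONLY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ allPosts { title } }"}'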

DatoCMS generates a static website, which results in a faster and more secure site. It fits in very well with frameworks, technologies, and generators like React, Next.js, Jekyll, Nuxt, Vue, PHP, Middleman, and Ruby on Rails. It is flexible, offers granular permissions, and comes with a GraphQL API.

DatoCMS does not build and deploy your website itself, but delegates that to external CI/CD services, offering out-of-the-box integrations with several of them.

To read more about the general concepts of DatoCMS, please refer to the official docs.

    Creating a DatoCMS Account & Starting a New Project

    Well, signing up for any service should be a piece of cake, really! Just head to https://dashboard.datocms.com/signup and you know the drill!

Once you sign up, you will be guided to the dashboard. DatoCMS is free for up to 3 projects, which should suit our use case. The DatoCMS Dashboard is easy to navigate, considering that the target audience includes non-technical clients. We will start by hitting the ‘New Project’ button.

Click on ‘Demo project’ and you will be led to the page of starter projects. We will be using the Next.js Blog starter project.

Now, click on ‘Start free project’. Name your project and choose a hosting solution. We will use Vercel to host the app.

    Configuring Vercel for DatoCMS Blog

    Clicking on ‘Click project’ should open a new tab for Vercel configuration. First, we need to create a GitHub repository to enable CI/CD. I will create a private repo but it is absolutely all right to keep your repo public.

    Now, we must integrate DatoCMS with the Vercel app (for obvious reasons).

    The next and the final step is to deploy our Git repo to Vercel. It should happen automatically and you should see a nice-looking message on your DatoCMS Dashboard.

    Editing the DatoCMS Blog

We have successfully deployed our blog. Now, we will modify global settings like the Blog Description, SEO, and Favicon from the Dato editor. Just click on your project in the dashboard and hit ‘Enter Project’. Once you are inside the editor, you will see a lot of settings in the left sidebar. Those are all the customizable options and essentially the content we can edit without having to tweak the codebase. To modify the favicon and SEO settings, we will click on the ‘Settings’ option.

    Now, let’s configure the homepage settings like Blog Title using the ‘Homepage’ option from the left sidebar.

    Exploring DatoCMS Blog Models

DatoCMS organizes editable data into a schema of ‘Models’, which you can think of as tables. Each model has a set of user-editable fields, like Post Title, Post Content, and Featured Image. Models are modular and provide a user-friendly editing experience.

    To see our models, we will click on ‘Settings’ in the top bar. Once we are there, we will click on ‘Models’ in the left sidebar. Our starter project has four pre-defined models – Author, Blog, Category, and Post. Each model can have a collection of records/instances, like we can have multiple posts on a single website.

    Our Post model has a number of fields – it has a Title, Author (which is linked to another model ‘Author’), and a structured text field to store the post Content. Then, it has some fieldsets, which are models inside models (sort of). Those handle our previews and post metadata.

    To know more about DatoCMS Models, you can check out DatoCMS’ documentation.

    Adding Content

    Now that we have an overview of what a model is, we will add a few posts. To add content (posts) to our blog, we will click on ‘Content’ in the top menu. Inside the left sidebar, we will have all our models. Before we do that, let’s add an author so that we can link it with our posts.

Let’s create a post. I will quickly copy my own post about how you can set up Grafana and Loki to monitor your server logs. The DatoCMS Editor supports rich text, and we can add things like lists, images, headings, and quotes by pressing the '/' key (just like WordPress). We will add stuff like quotes and code blocks (the DatoCMS Editor has an awesome code-block editing experience).

    Similarly, we can add the other two fieldsets – Preview and Metadata. Now, we are ready to publish!

    Building the Blog

    Once you are done with all the editing, you need to deploy the blog by ‘building’. In the top-right corner, click on ‘Build Status’ and hit ‘Build Now’. This will generate a static site and deploy it on Vercel. The logs can be checked in Settings > Activity Logs.

Bravo! Our website is live now. Click on ‘Visit Site’ in the ‘Build Status’ dropdown to see your site. You can check mine here.

    What’s Next?

In the next blog posts, I plan to go through how we can test our DatoCMS app/website locally, edit the code, and make it more customizable. Also, maybe we can check out how it compares to WordPress?!

    I would like to thank the contributors of the DatoCMS Blog Starter Project. Do check out the repo.

  • Grafana Loki – How to Monitor Server Logs Like a Pro!

    Grafana Loki – How to Monitor Server Logs Like a Pro!

If you love working with servers, you must have wanted a beautiful and efficient way to monitor your server logs. Grafana Loki does just that! Loki is a log aggregation system that is horizontally scalable and highly available. Inspired by Prometheus, Loki integrates very well with Grafana and is an amazing tool for monitoring your server logs. It is fast because it does not index the content of the logs, but rather attaches a set of labels to each log stream. In this tutorial, we will discuss how to set up Grafana Loki and integrate it with the Grafana Dashboard. We will also learn how to add Nginx logs to the explorer.

    The ‘Grafana’ Plan

I want to monitor my personal server’s (let us call it the ‘source’) logs. My personal server is a Hetzner VPS running Ubuntu Server 20.04 LTS. I plan to use an Amazon EC2 free-tier t2.micro instance to serve the Grafana Dashboard over HTTP. The source will run Grafana Loki inside a Docker container on port 3100. The dashboard instance will also run Ubuntu Server 20.04 LTS.

    Setting up the Grafana Dashboard

To serve our Grafana Dashboard, we will use an Amazon EC2 free-tier t2.micro instance running Ubuntu Server 20.04 LTS. Choosing the cloud service provider is completely up to you. You can also set up the dashboard locally or on a Raspberry Pi, if you have one. If you do not know how you can expose your Raspberry Pi to the public without a public IP, here is a guide for you. We do not need to do anything special, but make sure you allow access to port 80 (HTTP). Once it is done, connect to your instance via SSH.

    Installing Grafana

    Now, we need to install Grafana. We will install the Grafana Enterprise edition, but if you wish to go for the OSS release, you can follow this guide.

    sudo apt-get install -y apt-transport-https
    sudo apt-get install -y software-properties-common wget
    wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

    To install the stable release, we need to add the repository using the following command:

    echo "deb https://packages.grafana.com/enterprise/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

    If everything goes well, we should be ready to install the grafana-enterprise package.

    sudo apt-get update
    sudo apt-get install grafana-enterprise

    Starting Grafana

    Now that Grafana is successfully installed, we need to start the service. To check if the Grafana service is loaded, we need to use the following command:

    systemctl status grafana-server.service

    We should see that the service is loaded. Let’s enable the service now.

sudo systemctl enable grafana-server.service

    Finally, let’s start the service.

sudo systemctl start grafana-server.service

    Let’s check if everything works well. In my case, I will navigate to 34.201.129.128:3000.

    The default username is admin and password is admin.

    Redirecting Port 80 to Grafana Port

We want to access the Grafana Dashboard over HTTP, but binding Grafana to a port below 1024 would require running it as root. Upon installation, Grafana creates a grafana user and the service runs under that user, so instead we will redirect port 80 to 3000 using iptables and access Grafana over HTTP that way.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
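Note that this rule alone does not survive a reboot. One way to persist it (an assumption; other approaches exist) is the iptables-persistent package:

sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save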

    Now, I will create an A record pointing to the IP so that I can access Grafana over http://dummydash.danishshakeel.me.

The next step is to add a data source (Loki) to Grafana.

    Setting up Grafana Loki

In order for Grafana to be able to fetch our server logs, we need to install Grafana Loki on our source. We will use Docker for this, so make sure Docker is installed and working. First, let us clone the Loki repository.

    git clone https://github.com/grafana/loki.git

Now, let us cd into the loki/production/ directory and pull the required images.

    docker-compose pull

    This will pull three images – loki, promtail, and grafana. We are ready to spin our containers.

    docker-compose up

    This will make Loki available via port 3100.
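To confirm that Loki is actually up before pointing Grafana at it, you can query its readiness endpoint on the source machine (assuming the default port):

curl http://localhost:3100/ready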

    Adding Loki as a Data Source

    We are ready with Grafana Dashboard and Loki. Now, we need to integrate them. Head to your Grafana Dashboard > Gear Icon ⚙ > Data Source

    Click on ‘Add Data Source’ and choose Loki under Logging and Document Databases. Now, we will configure the Data Source.

    We are all set! We should now be able to explore our logs from the dashboard.

    Exploring Logs

To explore the logs, click on the Explore option in the sidebar. Click on the Data Source and you should be able to see the list of log files; select one to see the logs. You can also type in your query, for example: {filename="/var/log/syslog"} will yield the logs from syslog.
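Queries can also filter log lines. For example, the following LogQL query (using the same syslog stream) returns only the lines containing the word ‘error’:

{filename="/var/log/syslog"} |= "error"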

    Exploring Nginx Logs

Loki will not store Nginx logs out of the box. We need to configure our deployment to do that. The default configuration for promtail is located at /etc/promtail/config.yml. To check it, we need to first initiate a shell session in our promtail container. We can get a list of running containers with docker ps. Copy the promtail container’s id and run:

    docker exec -it <container_id> bash

We will create our own configuration to access the Nginx logs. Change into /opt/loki/ on your host machine and create a new file – promtail-config.yml.

    Add the following configuration to the file:

    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    
    positions:
      filename: /tmp/positions.yaml
    
    clients:
      - url: http://loki:3100/loki/api/v1/push
    
    scrape_configs:
      - job_name: system
        static_configs:
        - targets:
            - localhost
          labels:
            job: varlogs
            __path__: /var/log/*log
      - job_name: nginx
        static_configs:
        - targets:
            - localhost
          labels:
            job: nginx
            __path__: /var/log/nginx/*log
    

    We are simply adding another job and specifying the path to our nginx logs.

    Once our configuration file has been added, we need to edit our compose file and map the configuration file from our host to the promtail container.

    ...
    ...
        promtail:
            image: grafana/promtail:master
            volumes:
              - /opt/loki/promtail-config.yml:/etc/promtail/new-config.yaml
              - /var/log:/var/log
            command: -config.file=/etc/promtail/new-config.yaml
            networks:
              - loki
    ...
    ...

    You should now be able to see access.log and error.log in your Grafana Explorer.

There you have it! We have successfully configured Grafana and Grafana Loki to monitor our server logs. We have also learnt how to configure Promtail to ship Nginx logs to Grafana.

    What’s next?

Eventually, we want to create a centralized dashboard for all our logs as well as system metrics like disk and CPU usage. In a future post, I will discuss how we can add Prometheus to Grafana and monitor our system metrics.

  • Alfred – Slack Bot to Post Birthday and Anniversary Messages

    Alfred – Slack Bot to Post Birthday and Anniversary Messages

At rtCamp, the family is expanding. We wanted to automate the process of sending birthday and work anniversary wishes to our Slack workspace, and so Alfred was born. Alfred lets you send birthday and work anniversary messages using the Slack API. It uses Google Apps Script, a cloud-based JavaScript platform, and integrates seamlessly with Google Sheets.

    Alfred uses a single Google Sheet as the database for users’ data and wishes that you want to send out. Using time-based Google Triggers, one can run the Google Apps Script at specific intervals to send out the (random) messages automatically using the Slack API. Alfred is an open-source project, and you can find it on GitHub – https://github.com/danish17/alfred-slack-bot.

    Let’s take a look at how you can set up Alfred for your workspace.

    Creating a Google Sheet

    Google Sheet is where your data will live. The spreadsheet will have two sheets, let’s call them Data and Messages. The Data sheet will store all the details about the users, like their birthdates, anniversary dates, and their names. The Messages sheet will contain a list of the wishes and their type, Birthday or Anniversary.

The Messages sheet should have the following columns:

    1. Text – wish text
    2. Type – whether the wish is for a birthday or an anniversary

    In the Text, you can add text placeholders like <names>; Alfred will automatically replace them with the recipients’ names. For example, if the text is – “I wish <names> a very happy birthday”, Alfred will generate the wish – “I wish John Doe, and Lionel Messi a very happy birthday”.

If there are multiple wishes for each type, Alfred will randomly choose one.

You can download a demo sheet (.xlsx) using this link – https://github.com/danish17/alfred-slack-bot/blob/master/example-data/Test-Data.xlsx

    Setting up the script

    Now it is time to add Alfred to your Google Sheet. For that, we need to create a script and import Alfred as a library.

    Open your sheet and click on Tools > Script Editor

    It will open Google Apps Script Console.

    Importing Alfred

    Before Alfred can serve us, we need to add him to the script. To do that, click on the Plus Icon (+) in front of Libraries in the left sidebar.

    Paste the following Script ID: 1u4gU_yqTtdvhckO5JymTXz87MDKerxg8jc2bPeO4x6ATRS8O7cEs7eoj

    Click on Look Up and it should detect Alfred. In Version, choose the latest version and hit Add. You may also choose HEAD (Development Mode) but it may cause breaking changes, so it is not recommended.

    Note: There may be permission issues. Kindly drop a mail on [email protected] so that I can share access with you.

    To check if Alfred is correctly working, you can add the following code and hit Run.

    function main() {
      if (Alfred) Logger.log( 'Added' )
    }

    Configuring Alfred

    Now, we need to configure how Alfred works. The first step to configuring Alfred is to add Alfred to your Slack workspace and obtain an Incoming Webhook URL.

    Adding Alfred to Slack & Getting Webhook URL

    To add Alfred, you can click the button below or visit the URL: https://slack.com/oauth/v2/authorize?scope=incoming-webhook,chat:write&client_id=2618518958503.2630472038933

    Add to Slack

    After clicking, you will be redirected to alfred.danishshakeel.me and you will be shown the Incoming Webhook URL. Copy it, and save it.

    Adding Alfred to your Google Script

    Now that we have obtained the Webhook URL, we can move ahead. Open your Google Script and add the boilerplate code:

    function alfredExample() {
      // Instantiate a new config object with the Slack Webhook URL.
      const config = createConfig(YOUR_SLACK_WEBHOOK_URL_HERE)
    
      // Set parameters.
      config.dataSheet = YOUR_DATA_SHEET_NAME_HERE // Set name of the sheet containing data.
      config.messageSheet = YOUR_WISH_SHEET_NAME_HERE // Set name of the sheet containing messages.
      config.dobColumnKey = YOUR_BIRTHDATE_COLUMN_KEY_HERE // Birthdate column key.
      config.annivColumnKey = YOUR_ANNIVERSARY_COLUMN_KEY_HERE // Joining Date/Anniversary column key.
      config.namesColumnKey = YOUR_NAMES_COLUMN_KEY_HERE // Names column key.
      const date = new Date() // Init a date object.
      // date.setDate(date.getDate() - 1) // Example: match events for yesterday.
      config.dateToMatch = date // Set date.
    
      // Configure messages.
      config.birthdayHeader = YOUR_BIRTHDAY_HEADER_HERE
      config.birthdayImage = YOUR_BIRTHDAY_IMAGE_URL_HERE
      config.birthdayTitle = YOUR_BIRTHDAY_TITLE_HERE
      config.anniversaryHeader = YOUR_ANNIVERSARY_HEADER_HERE
      config.anniversaryImage = YOUR_ANNIVERSARY_IMAGE_URL_HERE
      config.anniversaryTitle = YOUR_ANNIVERSARY_TITLE_HERE
    
      // Run Alfred.
      runAlfred(config);
    }
    

    For our example:

    • config.dataSheet = 'Data' (Sheet name which contains users’ data)
    • config.messageSheet = 'Messages' (Sheet name which contains wishes)
    • config.dobColumnKey = 'DOB' (Column name which contain birthdates)
    • config.annivColumnKey = 'Joining' (Column name which contains anniversary dates)
    • config.namesColumnKey = 'rtCamper' (Column name which contains users’ names)

    For the messages, you can choose almost anything including some markdown and emojis.

    Test

You can use Alfred.testAlfred(config) to test Alfred.

    Note: It will send a message to the Slack channel that you've given Alfred access to.

    Setup Automatic Messaging (Triggers)

You would want Alfred to automatically wish people every day. For this purpose, we will utilize Google’s time-based triggers. In your Script Console, click on the Timer icon (⏰).

Now, click on the ‘Add Trigger’ button. We will run our function every day between 8 AM and 9 AM Indian Standard Time.
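If you prefer to create the trigger from code instead of the UI, here is a minimal sketch using Apps Script’s ScriptApp API (assuming your entry function is named alfredExample, as above):

function createDailyTrigger() {
  // Run alfredExample() once a day, in the 8 AM hour of the script's time zone.
  ScriptApp.newTrigger('alfredExample')
    .timeBased()
    .everyDays(1)
    .atHour(8)
    .create();
}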

    Voila! Alfred is all ready to send out wishes in your Slack workspace every day.

    Future of Alfred

    There are a lot of things that we can add to Alfred. Although Alfred was born to send out birthday and anniversary wishes, we can add the ability to send out important notifications.

    In the short-term, I am planning to add:

    1. Ability to create custom layouts
    2. Use separate Google Spreadsheets for Data and Messages
    3. Add codebase for JavaScript and Python

I want to see Alfred as an all-in-one Slack bot that behaves like the real ‘Alfred Pennyworth’.

  • Creating an SSH Tunnel using Cloudflare Argo and Access

    Creating an SSH Tunnel using Cloudflare Argo and Access

I had always wanted to access my home server, running on a Raspberry Pi 4, from outside the local network. The most straightforward answer seemed to be getting a static IP from the ISP; however, neither of my ISPs would help me with that. I forgot about it for a while, but when I flashed my Pi a couple of days ago, I knew I had to do it. Being able to SSH and rsync into my Pi on the fly is pretty cool! Today we will learn how to create an SSH tunnel using Cloudflare’s Argo and Access.

I tried this script to update the Cloudflare DNS records with my public IP. In addition to the script, I used a cron job to automatically handle updates every minute, but it did not work. It turns out that my ISPs use CGNAT, so I would have to create port forwarding rules in the ISP’s router for this method to work, which will never be allowed. I then came across Cloudflare Argo, which lets you tunnel services running locally to Cloudflare.

    Installing Cloudflared

Cloudflared (pronounced: cloudflare-dee) is a lightweight server-side daemon that connects your infrastructure to Cloudflare. Using cloudflared, we will create an SSH tunnel. The installation is straightforward, and you can find the compatible package here. We will install the ARM cloudflared .deb package on our Raspberry Pi.

Once downloaded, we will use dpkg to install the package.

$ dpkg -i <path_to_the_deb_package>

    We can verify the installation using this command:

    $ cloudflared -V
    cloudflared version 2021.9.2 (built 2021-09-28-1343 UTC)

    Setting up Cloudflare Access

    Next, we will create a subdomain and secure it with Cloudflare Access. Access secures SSH connections and other protocols with Cloudflare’s global network, with a Zero-Trust Approach.

    Login to your Cloudflare account and choose your domain. On the Dashboard, click on ‘Access‘.

    Next, we need to create an ‘Access Policy’. Click on ‘Create Access Policy Button’ in the ‘Access Policies’ section.

Users will be able to request access to ‘Raspberry Pi Server’ on pi.danishshakeel.me, and each session will expire after 24 hours.

    Creating SSH Tunnel

    Before we can create a tunnel, we need to login to cloudflared.

    $ cloudflared tunnel login

This command will output a link you can use to authorize the Argo tunnel. Select the domain on which you wish to authorize Argo. Now, we can create an Argo SSH tunnel using the following command:

    $ cloudflared tunnel --hostname <subdomain> --url <url_to_service>

    We want to tunnel SSH on localhost to pi.danishshakeel.me. The command will look like this:

    $ cloudflared tunnel --hostname pi.danishshakeel.me --url ssh://localhost:22

To verify, we can check our DNS records in Cloudflare; there should now be an AAAA record for our subdomain.

    Connecting to the SSH Tunnel

    In order to connect to the tunnel, we need to install cloudflared on the client. After installing, we need to run:

    $ cloudflared access ssh-config --hostname pi.danishshakeel.me

This will print the configuration that we need to add to our SSH config file (typically ~/.ssh/config). The configuration will look like this:

    Host pi.danishshakeel.me
      ProxyCommand /opt/homebrew/bin/cloudflared access ssh --hostname %h

Now, to connect over SSH, we will run ssh username@subdomain. For my Raspberry Pi, the username is pi and the hostname is pi.danishshakeel.me. This command will also output a link that we need to open to authorize the connection.

    You should be able to successfully ssh into your server. Remember that you need to start the tunnel before trying to access it.
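To avoid starting the tunnel by hand after every reboot, you can move the same parameters into cloudflared’s configuration file and install it as a service. A minimal sketch, assuming the legacy --hostname style tunnel used above and a config file at /etc/cloudflared/config.yml:

# /etc/cloudflared/config.yml
hostname: pi.danishshakeel.me
url: ssh://localhost:22

Then register cloudflared as a system service so the tunnel starts on boot:

sudo cloudflared service install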

  • Wildcard SSL Certificate on Linode using Certbot

    Wildcard SSL Certificate on Linode using Certbot

I recently migrated to Linode for my personal portfolio and project (proof of concept) websites. I am running Ubuntu Server 20.04 LTS on a 1GB Nanode. Most of my websites use WordPress, and I use Nginx, MariaDB, and PHP (LEMP) as my stack. I use a Multisite Network since it lets me manage all my websites from a single dashboard.

Initially, I was using a single site, so I used Certbot to install a Let’s Encrypt SSL certificate. If you plan to host only one site on your server, you should be good to go with a single Certbot command; however, if you run, or plan to run, more than one site on your server, the process is different. Let’s learn how we can install a wildcard SSL certificate on Linode.

    Generating a Token

    To let Certbot manage your DNS Records, we first need to generate an API token or Personal Access Token (PAT). To generate an API token:

    1. Log in to your Linode account
    2. Click on your Profile & Account settings
    3. Choose API Tokens

    Once you’re in, click on ‘Create a Personal Access Token’ option.

    Create a new token that can read/write your Domain Records. Since you’ll most likely be using this token just for Certbot, you can disable all the other privileges.

    Click on ‘Create Token’, copy the generated token and save it somewhere safe. The tokens cannot be viewed again, so if you lose it, you’ll have to regenerate it.

    Now, create an .ini file to store your token. Your .ini file should look like this:

    # Linode API Credentials .ini file
    dns_linode_key = <YOUR_API_KEY>
    dns_linode_version = 4
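Since this file contains a secret, restrict its permissions; Certbot will warn about credential files that are readable by other users. For example (the path is just an illustration):

chmod 600 /root/.secrets/linode.ini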

    Installing Certbot

Certbot is a free, open-source software tool for automatically using Let’s Encrypt certificates on manually administered websites to enable HTTPS. We’ll use the certbot package and the python3-certbot-dns-linode plugin.

Now, we can install Certbot.

    sudo apt install certbot python3-certbot-dns-linode

    Generating Certificate

We will not use Certbot’s automatic Nginx configuration; instead, we will use Certbot to generate a certificate and then manually edit our Nginx files.

    To generate a certificate:

    certbot certonly --dns-linode --dns-linode-propagation-seconds <TIME_IN_SEC> -d <YOUR_DOMAIN> -d "*.<YOUR_DOMAIN>"

    For my website, the command will look like this:

    certbot certonly --dns-linode --dns-linode-propagation-seconds 180 -d danishshakeel.me -d "*.danishshakeel.me"

We are using ‘*’ to let Certbot know that all the subdomains, such as blog.danishshakeel.me, hire.danishshakeel.me, or www.danishshakeel.me, should be able to use the certificate. --dns-linode-propagation-seconds is the time (in seconds) to wait for the DNS changes to propagate before asking the ACME servers to verify.

    Certbot will ask you to input the path of the .ini file which we created.

    Input the path to your Linode credentials INI file (Enter 'c' to cancel): <PATH_TO_INI_FILE>
    Waiting 180 seconds for DNS changes to propagate
    Waiting for verification...
    Cleaning up challenges

    Congratulations, we have successfully generated our certificate and chain. Note down the path to the fullchain.pem and privkey.pem.

    Configuring Nginx

    Now, we can configure Nginx to use our certificate.

    options-ssl-nginx.conf

    Before we can edit our Nginx configurations, we need to ensure that options-ssl-nginx.conf exists in /etc/letsencrypt directory. In case it does not, we can simply create one and copy-paste this content into it.

    # This file contains important security parameters. If you modify this file
    # manually, Certbot will be unable to automatically provide future security
    # updates. Instead, Certbot will print and log an error message with a path to
    # the up-to-date file that you will need to refer to when manually updating
    # this file.
    
    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_timeout 1440m;
    ssl_session_tickets off;
    
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    
    ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

    Configuring Nginx Server

Now, let’s cd into our Nginx sites-available directory:

    cd /etc/nginx/sites-available

    Now, we need to open our configuration file. I am using the default server block as my configuration.

    sudo vi /etc/nginx/sites-available/default

    Inside the server block, we need to add a few lines:

    server {
    ...
    
    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    ssl_certificate <FULLCHAIN_PEM_PATH>;
    ssl_certificate_key <PRIVKEY_PEM_PATH>;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    }
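After saving the changes, it is worth validating the configuration and reloading Nginx so the certificate is picked up:

sudo nginx -t
sudo systemctl reload nginx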

    Voila! You have successfully configured Let’s Encrypt Wildcard SSL Certificate on Nginx using Certbot.
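Let’s Encrypt certificates are valid for 90 days; the certbot package schedules automatic renewal (via a systemd timer or cron entry, depending on your distribution). You can verify that renewal will work with a dry run:

sudo certbot renew --dry-run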

    Footnotes:

    The process is similar for other providers, provided the provider is supported by Certbot. Here is the list of supported providers.

  • Configure Logitech MX Master 3 on Linux (LogiOps)

    Configure Logitech MX Master 3 on Linux (LogiOps)

I was a Windows user until very recently, when I decided to switch to a Linux distribution as my daily driver. I chose Zorin OS 16 Pro, primarily because – (1) it is based on Ubuntu, which I have been using on my Raspberry Pi 4 for a while, and (2) it comes with pre-installed apps (which saved me a couple of hours). The only reason I was reluctant to switch was that Linux does not support Adobe CC out of the box, and it does not support Logitech Options either.

The MX Master 3 is one of my prized possessions; it is very close to my heart. It is one of the finest mice that I have ever had, and it feels really nice. Although I do not use all of the MX Master 3’s buttons and gestures, I still wanted to be able to configure SmartShift and DPI. Fortunately, LogiOps functions more or less like Logitech Options, albeit entirely from the command line.

    Installing LogiOps

Fire up your terminal (of course) and install the dependencies:

    sudo apt install cmake libevdev-dev libudev-dev libconfig++-dev 

    After this, you need to clone the LogiOps GitHub repo

    git clone https://github.com/PixlOne/logiops.git

    Next, you need to build the source. You can refer to this link for that.

Once you are done building the project, install it by running:

    sudo make install

    Enable and start the daemon by running the following command

    sudo systemctl enable --now logid

You should now be able to run logid with:

    sudo logid

    The output should look something like this:

    [WARN] Error adding device /dev/hidraw2: std::exception
    [INFO] Detected receiver at /dev/hidraw1
    [WARN] Error adding device /dev/hidraw5: std::exception
    [INFO] Detected receiver at /dev/hidraw4
    [WARN] Error adding device /dev/hidraw4: No DJ reports
    [INFO] Device found: Wireless Mouse MX Master 3 on /dev/hidraw1:1
    

    Configuring

The configuration file resides at /etc/logid.cfg. If it does not exist, you can simply create it with touch /etc/logid.cfg.

Open logid.cfg and paste in the contents of this GitHub Gist.

    // Logiops (Linux driver) configuration for Logitech MX Master 3.
    // Includes gestures, smartshift, DPI.
    // Tested on logid v0.2.3 - GNOME 3.38.4 on Zorin OS 16 Pro
    // What's working:
    //   1. Window snapping using Gesture button (Thumb)
    //   2. Forward Back Buttons
    //   3. Top button (Ratchet-Free wheel)
    // What's not working:
    //   1. Thumb scroll (H-scroll)
    //   2. Scroll button
    
    // File location: /etc/logid.cfg
    
    devices: ({
      name: "Wireless Mouse MX Master 3";
    
      smartshift: {
        on: true;
        threshold: 15;
      };
    
      hiresscroll: {
        hires: true;
        invert: false;
        target: false;
      };
    
      dpi: 1500; // max=4000
    
      buttons: (
        // Forward button
        {
          cid: 0x56;
          action = {
            type: "Gestures";
            gestures: (
              {
                direction: "None";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_FORWARD" ];
                }
              },
    
              {
                direction: "Up";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_PLAYPAUSE" ];
                }
              },
    
              {
                direction: "Down";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_LEFTMETA" ];
                }
              },
    
              {
                direction: "Right";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_NEXTSONG" ];
                }
              },
    
              {
                direction: "Left";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_PREVIOUSSONG" ];
                }
              }
            );
          };
        },
    
        // Back button
        {
          cid: 0x53;
          action = {
            type: "Gestures";
            gestures: (
              {
                direction: "None";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_BACK" ];
                }
              }
            );
          };
        },
    
        // Gesture button (hold and move)
        {
          cid: 0xc3;
          action = {
            type: "Gestures";
            gestures: (
              {
                direction: "None";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_LEFTMETA" ]; // open activities overview
                }
              },
    
              {
                direction: "Right";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_LEFTMETA", "KEY_RIGHT" ]; // snap window to right
                }
              },
    
              {
                direction: "Left";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_LEFTMETA", "KEY_LEFT" ];
                }
    		  },
    
    		  {
                direction: "Up";
                mode: "onRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_LEFTMETA", "KEY_UP" ]; // maximize window
                }
    		  },
    		  
    		  {
                direction: "Down";
                mode: "OnRelease";
                action = {
                  type: "Keypress";
                  keys: [ "KEY_LEFTMETA", "KEY_DOWN" ]; // minimize window
                }
              }
            );
          };
        },
    	
        // Top button
        {
          cid: 0xc4;
          action = {
            type: "Gestures";
            gestures: (
              {
                direction: "None";
                mode: "OnRelease";
                action = {
                  type: "ToggleSmartShift";
                }
              },
    
              {
                direction: "Up";
                mode: "OnRelease";
                action = {
                  type: "ChangeDPI";
                  inc: 1000,
                }
              },
    
              {
                direction: "Down";
                mode: "OnRelease";
                action = {
                  type: "ChangeDPI";
                  inc: -1000,
                }
              }
            );
          };
        }
      );
    });

    This configuration will set the DPI to 1500 and SmartShift sensitivity to 15.
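After editing /etc/logid.cfg, restart the daemon so the new configuration is picked up:

sudo systemctl restart logid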

    Key Bindings and Actions

Button            | Action             | Performs
Mode Shift Button | Press              | Switch between Ratchet and Free Scroll mode
Mode Shift Button | Hold + Swipe Up    | Increase the DPI by 1000
Mode Shift Button | Hold + Swipe Down  | Decrease the DPI by 1000
Gesture Button    | Press              | Activities Overview
Gesture Button    | Hold + Swipe Right | Snap the window to right
Gesture Button    | Hold + Swipe Left  | Snap the window to left
Gesture Button    | Hold + Swipe Down  | Minimize the window
Gesture Button    | Hold + Swipe Up    | Maximize the window
Back Button       | Press              | Go Back
Forward Button    | Press              | Go Forward
Forward Button    | Hold + Swipe Up    | Play/Pause Media
Forward Button    | Hold + Swipe Down  | Super/Windows Key
Forward Button    | Hold + Swipe Right | Next Song
Forward Button    | Hold + Swipe Left  | Previous Song
    Configurations

    For more information on configuration, you may refer to this wiki. To learn more about the Linux Event Codes, like KEY_LEFTMETA, check out this link.

    Conclusion

    There are two things that won’t work with this logid.cfg:

    1. The thumb scroll wheel (useful for switching between tabs)
    2. Scroll press (I personally use it to emulate Ctrl + B in VS Code)

It would be nice to have Logitech Options on Linux, since the product information shows that the mouse is ‘compatible’ with Linux, Windows, and Mac. Logitech Options lets you choose app-specific settings, which is something I miss very much. But while the developers at Logitech work on Logitech Options for Linux (hoping that they are), LogiOps is the best tool we have to configure most of the mouse’s functionality.

    Issue ‘Forward/Back Button Not Working in VSCode’:

If your LogiOps configuration is not working in VSCode, please follow these steps:

    1. Open your logid.cfg file (it will be located at /etc/logid.cfg if you have followed my tutorial).
    2. Navigate to the desired section (forward button and back button have cid: 0x56; and cid: 0x53; respectively).
3. Change the ‘type’ from Gestures to Keypress.
    4. Bind desired keys to it (for event codes, look here).
    5. Open VSCode and go to Keyboard Shortcuts (Ctrl + K Ctrl + S).
    6. Bind your favourite action to the keys.

    For example, if I want to bind Toggle Tabs to Back Button, I will change:

action = {
  type: "Gestures";
  gestures: (
    {
      direction: "None";
      mode: "OnRelease";
      action = {
        type: "Keypress";
        keys: [ "KEY_BACK" ];
      }
    }
  );
};

    to

action = {
  type: "Keypress";
  keys: [ "KEY_LEFTCTRL", "KEY_PAGEDOWN" ];
};

    Thanks to Eduardo for pointing it out.

    You can achieve the same using VSCode Key Bindings, as suggested by Vladimir:

    1. Using the VSCode Settings (UI), find the “Go Back” action in the ‘Shortcuts’ settings
    2. Click on “Add Keybinding”
    3. Click the ‘back’ button on the mouse
    4. Repeat the steps for the ‘forward’ button

    Snap minimize or maximize window below cursor:

The default snap behaviour controls only the active window, i.e., the window which is selected (clicked upon). This may not be ideal since one has to activate the window before operating on it. Thanks to pLum0, we can make a script using xdotool to fix this.

    Check here: https://askubuntu.com/questions/1400834/how-to-snap-minimize-maximize-window-below-cursor

    Fix horizontal scrolling

In case you are facing issues with horizontal scroll (thumb scroll), you may try this fix by Joren Miner. Place the snippet below on the same level as “smartshift” or “hiresscroll”:

    thumbwheel: {
        divert: false;
        invert: false;
    };