Category: WebDev

  • WordPress Heading Anchors: Automatically Create Link Targets


    Ever struggled with creating easy navigation within your long WordPress posts? Or wished you could link directly to specific sections of your content? Creating WordPress heading anchors automatically can dramatically improve your content’s navigation. In this guide, I’ll show you how to transform your WordPress headings into clickable link targets, making it easier for readers to navigate through your posts and share specific sections.

    How to enable WordPress heading anchor generation?

    The easiest way to enable this functionality is through the theme’s functions.php file. Simply head to your WordPress backend:

    1. In the sidebar, hover over ‘Tools’
    2. Click on ‘Theme File Editor’
    3. In the file explorer, click on functions.php
    4. Paste the following code:
    add_filter( 'block_editor_settings_all', function( $editor_settings, $editor_context ) {
    	$editor_settings['generateAnchors'] = true;
    	return $editor_settings;
    }, 10, 2 );

    Note, however, that the theme’s functions.php is overwritten by theme updates, so adding custom code there directly is ill-advised.

    To ensure your custom code survives theme updates, you should create a child theme. A child theme inherits all the features of your main theme while allowing you to make safe customizations.

    Alternatively, you can create a simple site-specific plugin – just create a new PHP file in the wp-content/plugins directory with the following code:

    <?php
    /*
    Plugin Name: Auto Heading Anchors
    Description: Enable automatic heading anchors in the block editor
    Version: 1.0
    */
    
    add_filter( 'block_editor_settings_all', function( $editor_settings, $editor_context ) {
       $editor_settings['generateAnchors'] = true;
       return $editor_settings;
    }, 10, 2 );

    Both methods will preserve your code during theme updates. If you already have a custom plugin for your site’s functionality, that’s the perfect place to add this code.

    How does anchor generation work?

    Jump directly to the results

    The feature that made automatic anchor generation opt-in was introduced in this pull request after WordPress 5.9. I find this feature super useful and would have preferred it to be opt-out, or at least to have a UI to enable/disable it.

    The core/heading block is where the magic happens, specifically in the autogenerate-anchors.js file. When the heading block’s edit function detects content but no anchor, it calls generateAnchor(clientId, content), which uses getSlug() to transform the content:

    • Removes accents
    • Replaces non-alphanumeric chars with hyphens
    • Converts to lowercase
    • Trims leading/trailing hyphens

    The anchor becomes the slug unless there’s a duplicate, in which case it would add -1, -2, etc.
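    The transformation can be sketched in a few lines of TypeScript. Note that headingToAnchor is a hypothetical helper for illustration, not the actual getSlug() source, and the exact order of steps may differ:

```typescript
// Hypothetical sketch of how a heading's content becomes an anchor slug.
// Mirrors the steps above: strip accents, lowercase, hyphenate, trim.
function headingToAnchor(content: string): string {
  return content
    .normalize('NFD')                  // decompose accented characters
    .replace(/[\u0300-\u036f]/g, '')   // remove the combining accent marks
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')       // non-alphanumeric runs become hyphens
    .replace(/^-+|-+$/g, '');          // trim leading/trailing hyphens
}

console.log(headingToAnchor('Résumé Tips & Tricks!')); // "resume-tips-tricks"
```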

    Results

    After enabling autogenerated anchors, you can observe that WordPress automatically adds an ID to the headings. This can be used for various purposes.

    You can link to your heading anywhere in your content by using the anchor feature of blocks like paragraph, heading, and button.

    Adding link to heading in WordPress

    You can also create a table of contents for long posts.

    If you’re interested in learning more about the WordPress Block Editor, I shared some helpful tips at WordPress Meetup Saarland recently. Feel free to check out my notes from the talk here!

  • Block Editor Best Practices: WordPress Meetup Saarland


    WordPress Meetups are always one of the best ways to meet like-minded people, teach people about WordPress, have amazing discussions, and bring more people to the wonderful community. I participated in the 3rd WordPress Meetup in Saarland on 5th September, this time as a speaker. I talked about probably the most controversial feature of WordPress, the Block Editor (also known as Gutenberg). The topic was mainly about Block Editor Best Practices – for users, designers, and developers.

    Recently, we revamped rtCamp‘s website. It was a mammoth task – custom blocks, patterns, templates, and what not. During the process, we discovered some pain points with the block editor and also figured out some best practices. This talk focused on the outcomes of the project.

    During the talk, I realized how much context-switching I needed to do. One member of the audience was an artist who had just installed WordPress and wanted to know what Gutenberg is capable of. Another, Fredric Döll, who has founded Digitenser Consulting, wanted to learn more about how to efficiently create for and with the block editor for their clients.

    Gutenberg is a very powerful tool but it is often misunderstood. It is also important to understand that for some sites, Gutenberg may not make sense. But for the sites where editorial experience is key, it is imperative that the website is planned really well. A robust plan helps with feasible designs which lead to a better overall developer experience.

    The next WordPress Meetup in Saarland will happen on 23.01.2025. If you’re around Saarbrücken at that time, feel free to drop your email in the comments.

    Note: In the presentation, we discussed negative margins. Gutenberg does have support for negative margins; however, our discussion was more oriented towards user experience. Currently, negative margins in Gutenberg have a bit of a UX problem.

    Block Editor Best Practices – Deck

    You can access the presentation slides (Google Slides) via this link.

  • How to Cache POST Requests in Nginx


    Caching can substantially reduce load times and bandwidth usage, thereby enhancing the overall user experience. It allows the application to store the results of expensive database queries or API calls, enabling instant serving of cached data instead of re-computing or fetching it from the source each time. In this tutorial, we will explore why and how to cache POST requests in Nginx.

    There are only two hard things in Computer Science: cache invalidation and naming things.

    — Phil Karlton

    Caching POST requests: potential hazards

    By default, POST requests cannot be cached. Their (usually) non-idempotent (or “non-safe”) nature can lead to undesired and unexpected consequences when cached. Sensitive data, like passwords, which these requests may contain, risk exposure to other users and potential threats when cached. Additionally, POST requests often carry large payloads, such as file uploads, which can significantly consume memory or storage resources when stored. These potential hazards are the reasons why caching POST requests is not generally advised.

    Source: https://restfulapi.net/idempotent-rest-apis/

    Although it may not be a good idea to cache POST requests, RFC 2616 allows POST methods to be cached provided the response includes appropriate Cache-Control or Expires header fields.

    The question: why would you want to cache a POST request?

    The decision to cache a POST request typically depends on the impact of the POST request on the server. If the POST request can trigger side effects on the server beyond just resource creation, it should not be cached. However, a POST request can also be idempotent/safe in nature. In such instances, caching is considered safe.

    Why and how to cache POST requests

    Recently, while working on a project, I found myself designing a simple fallback mechanism to ensure responses to requests even when the backend was offline. The request itself had no side effects, though the returned data might change infrequently. Thus, using caching made sense.

    I did not want to use Redis for two reasons:

    1. I wanted to keep the approach simple, without involving ‘too many’ moving parts.
    2. Redis does not automatically serve stale cache data when the cache expires or is evicted (invalidate-on-expire).

    As we were using Nginx, I decided to go ahead with this approach (see figure).

    The frontend makes a POST request to the server, which has Nginx set up as a reverse proxy. While the services are up and running, Nginx caches their responses for a certain time; when the services are down, Nginx serves the response (even if it is stale) from its store.

    http {
        ...
        # Define cache zone
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my-cache:20m max_size=1g inactive=3h use_temp_path=off;
        ...
    }
    
    location /cache/me {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://service:3000;
    
        # Use cache zone defined in http
        proxy_cache my-cache;
        proxy_cache_lock on;
        
        # Cache for 3h if the status code is 200/201/302
        proxy_cache_valid 200 201 302 3h;
        
        # Serve stale cached responses
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_methods POST;
    
        # ! This is important
        proxy_cache_key "$request_uri|$request_body";
    
        proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
    
        # Add header to the response
        add_header X-Cached $upstream_cache_status;
    }

    Things to consider

    In proxy_cache_key "$request_uri|$request_body", we are using the request URI as well as the body as the identifier for the cached response. This was important in my case because the request (payload) and response contained sensitive information, and we needed to ensure that responses are cached on a per-user basis. This, however, comes with a few implications:

    1. Saving the request body may degrade performance (if the request body is large).
    2. Increased memory/storage usage.
    3. Even a slightly different request body will cause Nginx to cache a new response. This may cause redundancy and data mismatches.
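    The third implication is easy to see if we model the cache key: Nginx treats the body as an opaque string, so two payloads that are semantically identical but byte-for-byte different map to separate cache entries. A quick sketch, where the cacheKey helper is hypothetical and simply mirrors "$request_uri|$request_body":

```typescript
// Hypothetical model of proxy_cache_key "$request_uri|$request_body":
// the body is concatenated as a raw string, not parsed as JSON.
function cacheKey(uri: string, body: string): string {
  return `${uri}|${body}`;
}

// Semantically identical JSON, but byte-for-byte different bodies:
const a = cacheKey('/cache/me', '{"user":1,"page":2}');
const b = cacheKey('/cache/me', '{"page":2,"user":1}');

console.log(a === b); // false -> Nginx stores two separate cache entries
```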

    Conclusion

    Caching POST requests in Nginx may offer a viable solution for enhancing application performance. Despite the inherent risks associated with caching such requests, careful implementation can make this approach both safe and effective. This tutorial discussed how to implement POST request caching wisely.

    Want to know how we can monitor server logs like a pro, using Grafana Loki?

    Suggested Readings

    1. Idempotent and Safe APIs
    2. Nginx Proxy Module
    3. Caching POST Requests with Varnish
  • Protected Routes in Next.js


    If you are building a SaaS website that has awesome features or a simple website with minimal user functionality, you know Authentication and Authorization are crucial (difference between authentication and authorization). Protected Routes in Next.js help us ensure that unauthenticated users are not able to see routes/pages intended for logged-in (authenticated) users. There are a few approaches to implement Protected Routes in Next.js, i.e., to enforce authentication for a page/route.

    But, first of all – why do we love Next.js? Next.js is arguably the most popular and go-to React framework. It packs some cool stuff including file-based routing, incremental static regeneration, and internationalization (i18n). With Next.js 13, we have got even more power – layouts and Turbopack!

    You might be wondering – why bother protecting routes? We are building a SaaS product with a Next.js frontend and Nest.js backend. We have implemented authentication in the backend, but we also need to ensure that forced browsing* is prevented and the User Experience is enriched. The actual authentication logic should reside in our back-end logic, and all API calls must be appropriately authenticated. In our app, whenever there is an unauthenticated request, the backend returns 401 Unauthorized. An ACL is also in place, so whenever a user requests a resource they do not have access to, the backend returns 403 Forbidden.

    Now, let’s create a route protection flow in Next.js:

    1. If a user requests a protected route (something that requires authentication), we redirect them to the login page.
    2. We should not prevent access if a route is public (supposed to be viewed regardless of the user’s authentication state), like the login page.

    At the end of the day, the goals are simple: safety and security.

    — Jodi Rell

    Using RouteGuard

    The concept of a RouteGuard is simple. It is a wrapper component that checks whether the user has access to the requested page on every route change. To track access, we use a single state variable: authorized. If authorized is true, the user may see the page; otherwise, they are redirected to the login page. To update the state, we have a function authCheck() which prevents access (sets authorized to false) if the user does not have access and the page is not public (e.g. landing page, login page, sign-up page).

    Working of RouteGuard
    import { Flex, Spinner } from '@chakra-ui/react';
    import { useRouter } from 'next/router';
    import publicPaths from '../data/publicPaths';
    import { useAppDispatch, useAppSelector } from '../hooks/storeHooks';
    import { setRedirectLink } from '../redux/AuthSlice';
    import {
      JSXElementConstructor,
      ReactElement,
      useEffect,
      useState,
    } from 'react';
    
    const RouteGuard = (props: {
      children: ReactElement<unknown, string | JSXElementConstructor<unknown>>;
    }) => {
      const { children } = props;
    
      const router = useRouter();
      const [authorized, setAuthorized] = useState(false);
      const user = useAppSelector((state) => state.auth);
    
      const dispatch = useAppDispatch();
    
      useEffect(() => {
        const authCheck = () => {
          if (
            !user.isLoggedIn &&
            !publicPaths.includes(router.asPath.split('?')[0])
          ) {
            setAuthorized(false);
            dispatch(setRedirectLink({ goto: router.asPath }));
            void router.push({
              pathname: '/login',
            });
          } else {
            setAuthorized(true);
          }
        };
    
        authCheck();
    
        const preventAccess = () => setAuthorized(false);
    
        router.events.on('routeChangeStart', preventAccess);
        router.events.on('routeChangeComplete', authCheck);
    
        return () => {
          router.events.off('routeChangeStart', preventAccess);
          router.events.off('routeChangeComplete', authCheck);
        };
      }, [dispatch, router, router.events, user]);
    
      return authorized ? (
        children
      ) : (
        <Flex h="100vh" w="100vw" justifyContent="center" alignItems="center">
          <Spinner size="xl" />
        </Flex>
      );
    };
    
    export default RouteGuard;

    Note: we are using Redux to store the user’s data; authentication is out of the scope of this blog post.
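    As for the publicPaths module imported above: it is just a list of routes that should be viewable without authentication. A minimal sketch, where the concrete paths are assumptions for illustration:

```typescript
// ../data/publicPaths (sketch): routes that do not require authentication.
// The exact paths are illustrative assumptions, not the project's real list.
const publicPaths: string[] = ['/', '/login', '/signup'];

export default publicPaths;
```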

    Implementing the Middleware

    In a scenario where the user’s session expires while they are on a protected page, they will not be able to fetch newer resources (or perform any actions, for that matter). That’s, once again, really bad UX. We cannot expect the user to refresh, so we need a way to let them know that their session is no longer valid.

    To implement this, we will use another middleware – this time a Redux middleware rather than a Next.js one. In a few words, a Redux middleware sits between dispatching an action and the moment the action reaches the store: it lets you inspect every dispatched action (and its payload) and run code – logging, redirecting, dispatching other actions – before the action is processed.

    After the session expires, any request the user makes will result in 401 Unauthorized. We have implemented a middleware which inspects the result of each request made from the frontend; if a request results in 401 Unauthorized, we dispatch a logout action – logging the user out and redirecting them to the login page.

    Working of the middleware
    import {
      MiddlewareAPI,
      isRejectedWithValue,
      Middleware,
    } from '@reduxjs/toolkit';
    import { logout } from '../redux/AuthSlice';
    import { store } from '../redux/store';
    
    interface ActionType {
      type: string;
      payload: { status: number };
      meta: {};
      error: {};
    }
    
    const unauthenticatedInterceptor: Middleware =
      (_api: MiddlewareAPI) =>
      (next: (action: ActionType) => unknown) =>
      (action: ActionType) => {
        if (isRejectedWithValue(action)) {
          if (action.payload.status === 401 || action.payload.status === 403) {
            console.error('MIDDLEWARE: Unauthorized/Unauthenticated [Invalid token]');
            store.dispatch(logout());
          }
        }
    
        return next(action);
      };
    
    export default unauthenticatedInterceptor;
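    For completeness, the interceptor has to be registered when the Redux store is created. A minimal sketch of the store setup under our assumed file layout (the reducer name and import paths are assumptions):

```typescript
import { configureStore } from '@reduxjs/toolkit';
import authReducer from '../redux/AuthSlice';
import unauthenticatedInterceptor from './unauthenticatedInterceptor';

// Keep Redux Toolkit's default middleware and append our interceptor
export const store = configureStore({
  reducer: { auth: authReducer },
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().concat(unauthenticatedInterceptor),
});
```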

    Suggested Readings