I usually have to SSH into a lot of servers, both personal and work-related. Remembering their hostnames or IPs has always been a chore. I have tried a few apps like Termius, but they often come with their own set of drawbacks. Many of these solutions are paid, which can be a significant investment if you’re just looking for a simple way to manage your connections. They also tend to require extensive setup and configuration, which is time-consuming when you just want to connect to your servers quickly.
What I really needed was a lightweight, free solution that I could set up quickly and start using right away. I wanted something that would help me organize my SSH connections without the overhead of a full-featured (and often overpriced) application.
That’s why I decided to create my own solution: a simple npm package that addresses these exact pain points. My goal was to develop a tool that’s easy to install, requires minimal setup, and gets you connected to your servers with minimal fuss.
In this post, I’ll introduce you to this package and show you how it can simplify your SSH workflow without breaking the bank or requiring a considerable effort to set up.
Installing simple-sshc
simple-sshc requires Node.js version 14.0.0 or above. If you have not already, you can install Node.js and npm from the official Node.js website.
Once you have Node.js and npm set up, run this command to install simple-sshc globally:
$ npm install -g simple-sshc
You can verify the installation using:
$ sshc version
sshc version 1.0.1
Connecting to a server
You can SSH into any of your saved hosts by simply invoking the sshc command.
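A session might look something like this, reusing the example host added below (the exact prompt text is illustrative and may differ slightly between versions):

```
$ sshc
? Select the connection: myserver
Connecting to user@192.168.1.100...
```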
Features
Adding connections
Easily add new SSH connections to your list with a simple interactive prompt:
$ sshc add
Enter the label: myserver
Username: user
Hostname (IP address): 192.168.1.100
The CLI guides you through the process, ensuring you don’t miss any crucial details. Once added, your connection is saved and ready for quick access.
List all connections
View all your saved connections at a glance:
$ sshc list
Modify existing connections
Need to update a connection? You can use sshc modify to do that:
$ sshc modify
? Select the connection to modify: myserver
? New username: newuser
? New hostname (IP address): 192.168.1.101
Remove connections
Cleaning up is just as easy:
$ sshc remove
? Select the connection you wish to remove: oldserver
? Are you sure you want to remove this connection? Yes
Caching can substantially reduce load times and bandwidth usage, thereby enhancing the overall user experience. It allows the application to store the results of expensive database queries or API calls, enabling instant serving of cached data instead of re-computing or fetching it from the source each time. In this tutorial, we will explore why and how to cache POST requests in Nginx.
There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton
Caching POST requests: potential hazards
By default, responses to POST requests are not cached. Their (usually) non-idempotent (or “non-safe”) nature can lead to undesired and unexpected consequences when cached. Sensitive data these requests may contain, like passwords, risks exposure to other users and potential threats when cached. Additionally, POST requests often carry large payloads, such as file uploads, which can consume significant memory or storage when stored. These hazards are the reasons why caching POST requests is generally not advised.
Although it may not be a good idea to cache POST requests, the HTTP specification (RFC 2616, later superseded by RFC 7231) allows responses to POST methods to be cached, provided the response includes appropriate Cache-Control or Expires header fields.
The question: why would you want to cache a POST request?
The decision to cache a POST request typically depends on the impact of the POST request on the server. If the POST request can trigger side effects on the server beyond just resource creation, it should not be cached. However, a POST request can also be idempotent/safe in nature. In such instances, caching is considered safe.
Why and how to cache POST requests
Recently, while working on a project, I found myself designing a simple fallback mechanism to ensure responses to requests even when the backend was offline. The request itself had no side effects, though the returned data might change infrequently. Thus, using caching made sense.
I did not want to use Redis for two reasons:
I wanted to keep the approach simple, without involving ‘too many’ moving parts.
Redis does not automatically serve stale cache data when the cache expires or is evicted (invalidate-on-expire).
As we were using Nginx, I decided to go ahead with this approach (see figure).
The frontend makes a POST request to the server, which has Nginx set up as a reverse proxy. While the services are up and running, Nginx caches their responses for a certain time; if the services go down, Nginx serves the (possibly stale) cached response from its store.
http {
...
# Define cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my-cache:20m max_size=1g inactive=3h use_temp_path=off;
...
}
location /cache/me {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://service:3000;
# Use cache zone defined in http
proxy_cache my-cache;
proxy_cache_lock on;
# Cache for 3h if the status code is 200/201/302
proxy_cache_valid 200 201 302 3h;
# Serve stale cached responses when the upstream errors out
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_methods POST;
# ! This is important
proxy_cache_key "$request_uri|$request_body";
proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
# Add header to the response
add_header X-Cached $upstream_cache_status;
}
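To verify the cache is working, you can send the same POST request twice and inspect the X-Cached header added in the config above. The first response should report MISS and a repeat (within the 3h validity window) should report HIT. The endpoint and payload below are placeholders for your own service:

```
curl -s -o /dev/null -D - -X POST -d '{"id": 42}' http://localhost/cache/me | grep -i x-cached
curl -s -o /dev/null -D - -X POST -d '{"id": 42}' http://localhost/cache/me | grep -i x-cached
```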
Things to consider
In proxy_cache_key "$request_uri|$request_body", we are using the request URI as well as the body as an identifier for the cached response. This was important in my case, as the request payload and response contained sensitive information, and we needed to ensure that responses are cached on a per-user basis. This, however, comes with a few implications:
Saving the request body may degrade performance if the request body is large.
Increased memory/storage usage.
Even if the request body is only slightly different, Nginx will cache a new response. This can lead to redundant entries and data mismatch.
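The last point can be illustrated with a small sketch of how the cache key from the config above behaves. The URI and bodies here are made-up examples:

```shell
# Mimic nginx's proxy_cache_key "$request_uri|$request_body"
uri="/cache/me"
body_a='{"user":"alice"}'
body_b='{"user":"alice","v":2}'

key_a="${uri}|${body_a}"
key_b="${uri}|${body_b}"

# Any difference in the body, however small, yields a different key,
# so nginx stores (and later serves) a separate cache entry.
if [ "$key_a" != "$key_b" ]; then
  echo "distinct cache entries"
fi
```

This is why a per-user payload gives per-user caching, but also why adding a volatile field (a timestamp, a nonce) to the body would defeat the cache entirely.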
Conclusion
Caching POST requests in Nginx can be a viable way to improve application performance. Despite the inherent risks associated with caching such requests, careful implementation can make this approach both safe and effective. In this tutorial, we discussed how to implement POST request caching wisely.
If you love working with servers, you have probably wanted a beautiful and efficient way to monitor your server logs. Grafana Loki does just that! Loki is a horizontally scalable, highly available log aggregation system. Inspired by Prometheus, Loki integrates well with Grafana and is an amazing tool for monitoring your server logs. It is fast because it does not index the content of the logs; instead, it attaches labels to each log stream. In this tutorial, we will discuss how to set up Grafana Loki and integrate it with the Grafana Dashboard. We will also learn how to add Nginx logs to the explorer.
The ‘Grafana’ Plan
I want to monitor my personal server’s (let us call it the ‘source’) logs. My personal server is a Hetzner VPS running Ubuntu Server 20.04 LTS. I plan to use an Amazon EC2 free-tier t2.micro instance to serve the Grafana Dashboard over HTTP. The source will run Grafana Loki inside a Docker container on port 3100. The dashboard instance will also run Ubuntu Server 20.04 LTS.
Setting up the Grafana Dashboard
To serve our Grafana Dashboard, we will use an Amazon EC2 free-tier t2.micro instance running Ubuntu Server 20.04 LTS. The choice of cloud service provider is completely up to you. You can also set up the dashboard locally or on a Raspberry Pi, if you have one. If you do not know how to expose your Raspberry Pi to the public without a public IP, here is a guide for you. We do not need to do anything special, but make sure you allow access to port 80 (HTTP). Once that is done, connect to your instance via SSH.
Installing Grafana
Now, we need to install Grafana. We will install the Grafana Enterprise edition, but if you wish to go for the OSS release, you can follow this guide.
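On Ubuntu, the Enterprise edition can be installed from Grafana’s apt repository along these lines (check Grafana’s official installation docs for the current signing key and repository details, as they change over time):

```
sudo apt-get install -y apt-transport-https software-properties-common wget
# Add Grafana's signing key and the Enterprise apt repository
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/enterprise/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install -y grafana-enterprise
```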
Now that Grafana is successfully installed, we need to start the service. To check if the Grafana service is loaded, we need to use the following command:
systemctl status grafana-server.service
We should see that the service is loaded. Let’s enable the service now.
systemctl enable grafana-server.service
Finally, let’s start the service.
systemctl start grafana-server.service
Let’s check if everything works well. In my case, I will navigate to 34.201.129.128:3000.
The default username is admin and password is admin.
Redirecting Port 80 to Grafana Port
We want to access the Grafana Dashboard over HTTP, but binding Grafana to ports below 1024 would require running it as root. Upon installation, Grafana creates a grafana user, and the service runs under that user. Instead, we will redirect port 80 to port 3000 using iptables so that we can access Grafana over HTTP.
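With iptables, the redirect is a single NAT rule (run as root; this assumes Grafana is on its default port 3000):

```
# Redirect incoming HTTP traffic on port 80 to Grafana on port 3000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
```

Note that this rule does not survive a reboot on its own; a tool like iptables-persistent can be used to make it permanent.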
Next step is to add a data source (Loki) to Grafana.
Setting up Grafana Loki
In order for Grafana to fetch our server logs, we need to install Grafana Loki on our source. We will use Docker for this, so make sure Docker is installed and working. First, let us pull the repository.
git clone https://github.com/grafana/loki.git
Now, let us cd into the loki/production/ directory and pull the required images.
docker-compose pull
This will pull three images – loki, promtail, and grafana. We are ready to spin up our containers.
docker-compose up
This will make Loki available via port 3100.
Adding Loki as a Data Source
We are ready with the Grafana Dashboard and Loki. Now, we need to integrate them. Head to your Grafana Dashboard > Gear Icon (⚙) > Data Sources.
Click on ‘Add Data Source’ and choose Loki under Logging and Document Databases. Now, we will configure the Data Source.
We are all set! We should now be able to explore our logs from the dashboard.
Exploring Logs
To explore the logs, click on the Explore option in the sidebar. Click on the Data Source dropdown and you should see the list of log files; select one to view its logs. You can also type in your own query, for example: {filename="/var/log/syslog"} will yield the logs from syslog.
Exploring Nginx Logs
Loki will not store Nginx logs out of the box; we need to configure our deployment to do that. The default configuration for promtail is located at /etc/promtail/config.yml. To inspect it, we first need to start a shell session in our promtail container. We can get a list of running containers with docker ps. Copy the promtail container’s id and run:
docker exec -it <container_id> bash
We will create our own configuration to access the Nginx logs. cd into /opt/loki/ on your host machine and create a new file – promtail-config.yml.
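A minimal promtail-config.yml for scraping Nginx logs might look like the following. The Loki URL, positions file, and log paths here are assumptions based on the Docker setup above; adjust them to match your deployment:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  # Points at the Loki container listening on port 3100
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          # Picks up both access.log and error.log
          __path__: /var/log/nginx/*.log
```

After mounting this file into the promtail container (for example, over /etc/promtail/config.yml) and restarting it, promtail should start shipping the Nginx logs to Loki.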
You should now be able to see access.log and error.log in your Grafana Explorer.
There you have it! We have successfully configured Grafana and Grafana Loki to monitor our server logs. We have also learnt to configure Nginx with Promtail to serve logs to Grafana.
What’s next?
Naturally, we want to create a centralized dashboard for all our logs as well as system metrics like disk and CPU usage. In a future post, I will discuss how we can add Prometheus to Grafana and monitor our system metrics.