How does browser fingerprinting work?

What is browser fingerprinting?

Nowadays, most people are familiar with cookies – pieces of information stored in the browser that help websites remember our settings and follow our actions online. Many users know how to delete cookies or browse in private mode, hoping that this will grant them privacy and prevent companies from tracking them.

However, there is a technique called browser fingerprinting that makes it possible to track users online without any cookies. One might think that a handful of general system settings would hardly be enough to tell computers apart, but the combination of settings on a given system is usually unique enough to do exactly that.

Browsers automatically send many bits of information about a machine whenever they make a request to a web server. These technical, non-user-specific details usually help with communication between the browser and the server, for example serving the correct version of a website for a specific browser. One of the most common pieces of information shared is the user agent. For example, the user agent for a Windows 10 machine running the Edge browser looks something like: ‘Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246‘. This string identifies the operating system and the specific version of the browser, and it is automatically sent in the headers whenever a website is loaded.
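As a toy illustration of how much the string packs in, the parenthesized platform token can be pulled out of that user-agent string with a one-line sed sketch (the pattern below is just for this example):

```shell
# the user-agent string bundles several identifiers into one line;
# extract the parenthesized platform token with sed
ua='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246'
printf '%s\n' "$ua" | sed 's/^[^(]*(\([^)]*\)).*/\1/'   # → Windows NT 10.0; Win64; x64
```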

Other pieces of information that the browser can access include screen size, timezone, system fonts, language, available audio and video hardware, etc. Fingerprinting scripts can also perform operations with the audio or graphics hardware to create unique hashes. Using a protocol called WebRTC, it can even be possible to discover a user’s real IP address behind a VPN. Even though a fingerprint doesn’t contain any personally identifiable information, it makes it possible to identify a user across different websites without their knowledge or consent.
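The general idea can be sketched in a few lines of shell: concatenate whatever attributes are visible and hash the result. The attribute values below are made up, but the point carries over – two machines only produce the same fingerprint if every attribute matches:

```shell
# toy fingerprint: combine browser-visible attributes and hash them
ua='Mozilla/5.0 (X11; Linux x86_64)'   # hypothetical example values
screen='1920x1080'; tz='Europe/Madrid'; lang='en-US'
printf '%s|%s|%s|%s' "$ua" "$screen" "$tz" "$lang" | sha256sum | cut -d' ' -f1
```

Change any single attribute and the hash changes completely, which is what makes the blend of settings so identifying.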

Can browser fingerprinting be prevented?

Most browsers can be configured to make fingerprinting much harder, but it’s difficult to prevent it completely, since browsers need access to many of the functions that make fingerprinting possible just to display websites correctly or to support advanced features like video chat.

If you’re interested in fingerprinting, there are test sites that will tell you which techniques can be used to identify your browser and how unique your signature is. It’s quite eye-opening to see what information about your computer is available every time you visit a website, and how unique it is.


Difference between virtual machines and containers


Virtualization and containerization seek to achieve similar goals, but they are different concepts.

A virtual machine loads an operating system of its own, and it can be any type of OS you like. The virtual machine has its own hard drive, network interfaces, etc., which are then mapped to the physical hardware of your machine. So you can easily run a Windows or macOS machine on your Ubuntu host if you’d like to. Virtualization is different from emulation, however, so you’re still limited to the hardware you have available.

Containerization is the process of isolating an application and its dependencies into a container. It uses kernel functionality (like control groups and namespaces) that allows for resource isolation. Since you’re not running a full guest OS, you won’t be able to run a Windows container on a Linux machine or vice versa. Docker didn’t invent the concept of containers, but it is the most popular solution at the moment, and when people talk about containers they usually mean Docker.
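You can peek at the namespace mechanism on any Linux machine: every process already belongs to a set of namespaces, and a container runtime simply gives each container fresh copies of them. A minimal look, assuming a Linux system:

```shell
# list the namespaces the current shell belongs to (Linux-specific);
# typical entries include cgroup, ipc, mnt, net, pid, user and uts
ls /proc/self/ns
```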

Software required

In order to run a virtual machine on your computer you need a hypervisor. As per Wikipedia, a hypervisor is computer software, firmware or hardware that creates and runs virtual machines. There are two different types of hypervisors: type-1 (native) hypervisors run directly on the computer hardware, while type-2 (hosted) hypervisors run as a process on the host OS. Examples of hypervisors are Microsoft Hyper-V, VirtualBox and VMware Player.

Docker containers, on the other hand, are executed by the Docker Engine, which runs on the host OS.

Resource isolation

Virtual machines operate within a sandbox, meaning that they are completely isolated and self-contained. Containers share the host’s operating system, while each virtual machine runs an OS of its own. Resource isolation is one of the advantages that virtual machines have over containers.

Resource consumption

Most companies choose Docker over VMs because of its much smaller resource footprint. Virtual machines may also need resources assigned to them permanently while they’re running; containers allow for more flexibility and optimization.

There is also less redundancy when using containers – you don’t have to install a full guest operating system, just the components that you need for your application.

Each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run. The guest OS has its own memory management and virtual device drivers. As a result, containers are smaller than VMs and start up much faster. In many cases containers let you run more applications on the same hardware, and run them faster. However, this is not true in all situations: modern virtualization technology has reduced CPU and memory overhead, and in some configurations a virtual machine might run faster than a Docker container [1].

The biggest advantage of containers is that you can create a lightweight, portable and, above all, consistent operating environment for developing, testing and deploying applications.

Coding trivia 1

Random interesting stuff that I learned last week

ER diagrams

Entity Relationship diagrams are an important concept in database design. They help to conceptualize the relationships between tables in a database. I was looking for a quick refresher on ER diagrams, and two videos did the job: Part 1 and Part 2.

Unix sockets

You know when you’re trying to connect to MySQL and an error shows up saying that the socket file is missing? That’s because in MySQL the socket connection is enabled by default on localhost. What are Unix socket files? They are files that enable two applications to communicate with each other on the same machine, without the need for networking.

The sticky bit

In some cases you might want to make a folder writable to the whole world, but limit the scope of changes that each user can make to the files. The sticky bit limits activity in the folder so that only the owner of a file, the owner of the directory, or the root user can rename or delete it. This means that users won’t be able to mess with other users’ files, which makes the sticky bit a sensible setting for collaborative directories.

Adding it is easy with:
chmod +t test
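You can verify the result with ls: in the permission string the final x is replaced by t. A quick sketch, using the octal 1xxx form on an example shared directory:

```shell
# create a world-writable directory and set the sticky bit (octal 1xxx form)
mkdir -p shared
chmod 1777 shared
ls -ld shared | cut -c1-10    # the trailing 't' marks the sticky bit
```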

Wikipedia has a good explanation of the concept.

Creating a custom keyboard layout in Linux

Sometimes you might want to modify the keyboard layout, for example to add non-standard characters that you use often; or, if you write in two different languages, you might combine all their special characters in one keyboard layout. In my case, I bought a keyboard that had the Escape key mapped by default to the Home Page media key, and I found it irritating having to press the Fn key every time I needed Escape.

I’m using the Lubuntu Linux distribution that uses the X keyboard extension to enable multiple keyboard layouts.

First, run the xev command and filter only the keyboard events for a clearer log:

xev -event keyboard

When pressing a key it shows the keycode that is being sent and also the key symbol it is interpreted as. In my case, using the standard US layout, key 180 maps to XF86HomePage. The xev tool logs both the KeyPress and the KeyRelease event.

With this information we can look for the key symbol that we can use to modify the layout. In the directory “/usr/share/X11/xkb/keycodes” we can find the mapping in the evdev file.

cat /usr/share/X11/xkb/keycodes/evdev | grep 180

The output shows that the keycode 180 is mapped to the symbol <I180>:

 <I180> = 180;   // #define KEY_HOMEPAGE            172

Now, there is an easy way to map the key to the escape button:

xmodmap -e "keycode 180 = Escape"

To make the change persistent we can save the command in the ~/.Xmodmap file.

However, we can also go further and define a whole layout in the “/usr/share/X11/xkb/symbols” folder.

The easiest way to go about it is to copy one of the existing files to build upon, for example the standard US layout, and then remove the contents you don’t need.

The layout marked with default will be the main one, and the file can also contain other variants.

default  partial alphanumeric_keys modifier_keys
xkb_symbols "basic" {

    name[Group1]= "Custom Keyboard Layout";

    include "us(basic)";
    include "eurosign(5)";

    key <I180> {        [ Escape                        ]        };
};

In the code we’re first importing the US layout, then adding the euro sign on the 5 key, and finally mapping the I180 key to Escape. We can save the file as “cust”, for example.

After we save the file we should register the layout in the base.lst file in the rules folder, otherwise the configuration will not be available.

We’re looking for the layout section and adding the name of the file together with a description of the layout:

! layout
  cust           Custom keyboard layout

And with this the layout should be one of the choices in the layout selector.

Setting up the fixture in PHPUnit

When running our tests we should begin from a known state, called the fixture, and for that we might need to initialize variables, etc. After the tests finish we might have some cleaning up to do.

If you need custom code to run before the tests execute, PHPUnit allows you to configure a specific file to be run before the test execution.

This directive in your phpunit.xml configuration file will execute the bootstrap.php file before the tests:

<phpunit bootstrap="tests/bootstrap.php" >
    <!-- ... -->
</phpunit>

The bootstrap file can also be specified as a command line argument:

phpunit --bootstrap bootstrap.php

If every method in the class needs the same setup, we can use the provided functions to save ourselves from repeating the code.

PHPUnit has setUp() and tearDown() methods that run before and after every test method. We might use these, for example, to initialize variables.

Additionally, the setUpBeforeClass() and tearDownAfterClass() methods run before the first test method in the class executes and after the last one finishes, respectively. These might be useful for setting up, for example, a database connection. However, keep in mind that when tests share a fixture and are not fully independent, they become harder to debug.

Setting up WordPress development environment with Docker

One of the biggest challenges in web development is having a stable development environment and ensuring that the website works when deployed on different servers.

Docker helps us build isolated containers that give us this stable and predictable environment and save many headaches when debugging our website. Thanks to the preconfigured images available on Docker Hub and to Docker Compose, it’s a breeze to set up a development environment.

This post assumes that you already have Docker and Docker Compose installed on your system. On Linux, Docker Compose doesn’t come bundled with Docker, so make sure you install it separately.

Create a directory for the project and then paste the following into a docker-compose.yml file:

version: '3.3'

services:
  db:
    image: mysql:5.7
    # declaring the volumes so we have persistent data
    volumes:
      - wp_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    container_name: docker_wordpress_dev
    depends_on:
      - db
    # using the latest wordpress image
    # Docker will automatically look it up on Docker Hub
    image: wordpress:latest
    # binding the loopback port 8000 on the host to port 80 on the docker container
    ports:
      - "127.0.0.1:8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

# we need to declare the volumes that we will use
volumes:
  wp_data: {}

After running the “docker-compose up -d” command you should be able to access WordPress by entering localhost:8000 in your browser. This should bring up the WordPress installation page.

After completing the setup steps you’re ready to start working with WordPress.

If you’d like to access the source code, you can use the following command, which will drop you into the main folder of your WordPress installation:

docker exec -it docker_wordpress_dev bash

The database volume persists after the containers are stopped. You can check that by running the “docker-compose stop” command to stop the containers. Whenever you bring them back up you will be able to pick up where you left off, so whatever changes you made to your WordPress installation in Docker will persist. The database data lives in the wp_data volume.

Thanks to the persistent database volume, even if you remove the container you should be able to spin the WordPress site back up from the same image.

Downloading and installing custom Linux kernels

You might want to install a kernel other than the one supplied with your current version of Linux, for example because another version might work better with your hardware.

The Linux kernel is the core of the operating system that facilitates interaction between the hardware components and software.

You can install a different version of the kernel with the apt command:

sudo apt install linux-image

The command will present you with several options and you need to type the specific one you would like to install.

After installing the kernel you will be able to choose an additional option from the Linux boot menu.

In many cases the version you would like to install is not available from the command line, and you need to get the packages yourself.

Go to the kernel download page for your distribution to get the deb packages for the version you would like to try.

After downloading the correct files for your version and architecture, install them with the following command and then reboot your system:

sudo dpkg -i linux*.deb

In order to be able to boot, you might need to sign the binaries, disable validation for the kernel binary, or disable Secure Boot altogether in the BIOS settings.

Setting custom PHP code sniffing rules

Code sniffing is the practice of checking code for compliance with pre-determined standards. It’s very important to run your code through a sniffer because it helps with maintenance and ensures better code quality.

The most common standards in PHP are PSR-0, PSR-1, PSR-2 and PEAR. These rules might specify details such as the depth and type of indentation, placement of braces, spacing, variable names, etc. Depending on the standard, they might go into a lot of detail about how the structure of your code should look.

A popular tool for sniffing code in PHP is PHP_CodeSniffer (phpcs).

When running it you can add arguments that specify the folder with the files, ignored directories, coding standards, etc. When working on different projects for different clients it might be tiresome to specify them separately for every sniff. Phpcs lets you create a ruleset file that specifies custom rules and arguments to use. Whenever you run phpcs it will first look in the current directory for the file and, if found, apply the configuration.

The file should be in XML format. The most useful options that you can specify are:

The “file” tag adds files and folders to be sniffed.


The “exclude-pattern” tag lets you exclude specific directories or files from sniffing.


The “rule” tag lets you add a ruleset by its name and also exclude specific rules you would not like to apply:

<rule ref="PEAR">
   <exclude name="PEAR.NamingConventions.ValidFunctionName"/>
</rule>

The “arg” tag lets you specify arguments that should be passed to the phpcs executable, just like arguments from the command line. The names and acceptable values are the same as on the CLI, with a couple of arguments not permitted to be used there.

 <arg name="basepath" value="."/>
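Putting the pieces together, a minimal ruleset file might look like the sketch below; the path, pattern and ruleset name are placeholders for this example:

```xml
<?xml version="1.0"?>
<ruleset name="CustomStandard">
    <description>Example project ruleset</description>

    <!-- files and folders to sniff -->
    <file>src</file>

    <!-- skip third-party code -->
    <exclude-pattern>*/vendor/*</exclude-pattern>

    <!-- base standard, minus one naming rule -->
    <rule ref="PEAR">
        <exclude name="PEAR.NamingConventions.ValidFunctionName"/>
    </rule>

    <!-- CLI arguments -->
    <arg name="basepath" value="."/>
</ruleset>
```

With a file like this in the project root, running phpcs with no arguments applies the whole configuration.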

Using custom ruleset files is very useful when you would like to track the quality of your code and your projects must conform to different standards. Defining them before starting a project will surely help you produce cleaner code.

How to deploy a React app in a subdirectory on a WordPress site

The case here is pretty specific, but maybe somebody will find themselves in the same situation as me. I have a WordPress site and I wanted to deploy a React app, include it in my portfolio, and serve it from a subdirectory.

When deploying a React app in a website subdirectory there are some hurdles to overcome. First of all, WordPress redirects most requests to the main index.php file, so when you’re trying to load the app the server might direct the request there and you will get an error. To resolve this you need to edit the .htaccess file so that the requests are routed correctly. Below is the code you should add to your access file, before the WordPress rewrite conditions:

RewriteCond %{REQUEST_URI} ^/subdirectory/.*
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^.*$ /subdirectory/index.html [L]

If the request URI starts with the subdirectory and is not an existing file (the RewriteCond lines), the request will be routed to the app’s index file (the RewriteRule). The [L] flag tells the server not to process any more rules – otherwise it would execute the following ones as well.

Secondly, the React Router has a handy “basename” attribute that specifies the base of all URLs:

<Router basename="/subdirectory">
  <Route path="/my-account" component={Form} />
  <Route path="/about" component={About} />
</Router>

If we didn’t include the attribute, the paths would be relative to the root directory and would break. With the basename, they all include the base path.

And that’s pretty much it. With those 2 simple tweaks you can make your React app work from a subdirectory on a WordPress site.


What are .htaccess files?

If you have worked with WordPress, you have probably noticed that the installation includes a .htaccess file in its main directory.

What are these files?

A .htaccess file sets directives that an Apache server should execute when handling requests. In the default WordPress setup it makes the server point most requests to the main index.php file, which then loads all the necessary scripts. The server needs to be configured to look for and honor the directives included there, which is achieved using the AllowOverride directive, specified per directory. Access files let you set different directives per directory without touching the main server configuration.


<Directory "/path/from/where/the/files/are/served">
    AllowOverride All
</Directory>

In this case the server will honor all the directives that are permitted to be set in access files.

Other options are:

  • “None” – the server will not even look for .htaccess files
  • directive type – one or more of the directive groupings such as “AuthConfig” or “FileInfo”

The configuration is applied to the directory where the access file is found and to all its subdirectories. However, subdirectories might contain their own access files that override configuration from higher up, because the directives are applied in the order the files are found.

The default name of the access file is .htaccess, but this can be changed to something else using the AccessFileName directive:

AccessFileName ".config"