6 PostCSS plugins to help you migrate from SCSS

What is PostCSS

PostCSS is a tool for transforming CSS with JavaScript. Rather than defining a new CSS language like SCSS and Less, PostCSS lets you add features by installing plugins. You're free to write CSS any way you want, as long as the plugins can transform it into CSS. Being able to choose exactly which features you want is also great for performance. This is a relief for people who are tired of the constant churn in the frontend ecosystem: you can stick to plain CSS as much as you like and only add the features you want, which also makes it easier to migrate later if you ever want to, or need to. PostCSS already has a very rich ecosystem of plugins to choose from. Some amazing ones are cssnano for compressing CSS and postcss-preset-env for converting modern CSS into more compatible versions.

Here I'll share some plugins I used when migrating my SCSS code over to PostCSS. Over time, however, my setup has evolved into something different and more productive for me. I even wrote my own spacing plugin. The migration was definitely worth it.

Variables

Native CSS variables can be defined in modern browsers like so:


:root {
	--font-stack: Helvetica, sans-serif;
}

However, there are some limitations: they need to be defined inside a selector such as :root, and they can't be used in media queries. SCSS variables don't have these limitations and work almost like variables in a programming language. You can define one like so:


$font-stack: Helvetica, sans-serif;

For those who prefer SCSS variables, you can use postcss-simple-vars.
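
Under the hood this is a compile-time substitution. Here's a conceptual sketch (illustration only, not the plugin's actual implementation) of the idea, which is also why such variables work in media queries:

```javascript
// Conceptual sketch of what postcss-simple-vars does: substitute
// variables at build time, so they can appear anywhere in the stylesheet.
function substituteVars(css) {
	const vars = {}
	// collect and strip "$name: value;" definitions
	css = css.replace(/\$([\w-]+)\s*:\s*([^;]+);\s*/g, function (_, name, value) {
		vars[name] = value.trim()
		return ''
	})
	// replace each remaining $name usage with its value
	return css.replace(/\$([\w-]+)/g, function (_, name) {
		return vars[name]
	})
}

console.log(substituteVars('$tablet: 768px; @media (min-width: $tablet) { body { margin: 0; } }'))
// → "@media (min-width: 768px) { body { margin: 0; } }"
```

The real plugin works on the PostCSS AST rather than raw strings, but the output is the same idea: no variables are left at runtime.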

Nesting

Nesting is just too good to ignore. It makes your code more structured and easier to read, and makes you wonder why CSS didn't implement nesting earlier. If you want a feature-compatible nesting plugin, use postcss-nested.

I'd also like to recommend an alternative plugin, postcss-nesting. Instead of being SCSS compatible, it strives to implement the proposed CSS Nesting Module. There's a high chance this syntax will eventually be implemented in browsers, which would negate the need for the plugin. The difference in syntax is also very minor.

SCSS:


ul {
	li {}
}

CSS:


ul {
	& li {}
}

Notice the &? The CSS specification requires the & for every nested selector. So far I've yet to find any issue with this, so it makes a good permanent replacement for SCSS nesting.

Partials

Partials allow you to split and organize your styles, making them easier to maintain. The best plugin for this is postcss-import. While it might not be a perfect replacement, it only requires a small tweak to the syntax to get it working. It is also compatible with native CSS.

SCSS:

@use 'partial';

CSS:

@import 'partial.css';

There's one advantage postcss-import has over SCSS @use: it's customizable. For example, you can set addModulesDirectories to add extra directories to search when resolving imports.
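
As a sketch, a PostCSS config using this option might look like the following (the directory names are just examples):

```js
// postcss.config.js — assumes postcss-import is installed
module.exports = {
	plugins: [
		require('postcss-import')({
			// also search these directories when resolving @import
			addModulesDirectories: ['web_modules', 'styles/vendor'],
		}),
	],
}
```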

Mixins

Mixins let you define groups of CSS declarations that can be reused. They work almost the same way as functions. Some potential use cases are adding vendor prefixes or generating styles that depend on input. The plugin I would recommend is postcss-mixins. Although it is not a perfect replacement, it does provide some advantages over SCSS. If you want a perfect replacement, you might want to check out an archived project, postcss-sassy-mixins.

SCSS:


@mixin transform($property) {
	-webkit-transform: $property;
	-ms-transform: $property;
	transform: $property;
}
.box { @include transform(rotate(30deg)); }

CSS:


@define-mixin transform $property {
	-webkit-transform: $property;
	-ms-transform: $property;
	transform: $property;
}
.box { @mixin transform(rotate(30deg)); }

As you can see, only small changes are necessary.

Inheritance

Inheritance is a great feature to keep your code DRY. Using the @extend keyword, you can copy the properties of one selector to another. You can also define placeholders if you do not want to create a new selector. The best plugin for inheritance is postcss-extend. It works very similarly to SCSS, with some caveats: the author sacrificed some features to make it safer and cleaner, e.g. it does not allow you to extend into selector sequences.

Although a plugin exists for inheritance, it might not be the best idea to use it, as it is likely to introduce bugs. That's why it's recommended to only extend placeholders. If possible, use a different pattern like @mixin.

Operators

Operators allow you to perform math in your styles. This is useful when composing layouts whose parts depend on each other, e.g. grids. This is different from calc(), which is evaluated in the browser instead of being precompiled.

The closest plugin to achieve this is math-calc. But with the introduction of calc(), this plugin might not be necessary.
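
To illustrate the difference: a precompiling plugin evaluates expressions at build time, roughly like this conceptual sketch (illustration only, not how any particular plugin is implemented):

```javascript
// Evaluate simple "(a op b)" expressions at build time, so the
// browser never sees any math — unlike calc(), which it evaluates itself.
function precompile(css) {
	return css.replace(/\(([\d.]+)\s*([+\-*\/])\s*([\d.]+)\)/g, function (_, a, op, b) {
		const x = parseFloat(a)
		const y = parseFloat(b)
		switch (op) {
			case '+': return String(x + y)
			case '-': return String(x - y)
			case '*': return String(x * y)
			case '/': return String(x / y)
		}
	})
}

console.log(precompile('.col { width: (96 / 3)px; }')) // → ".col { width: 32px; }"
```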

Bonus

precss tries to replicate SCSS as closely as possible. It uses a few other plugins under the hood too. The drawback of using this plugin is that you'll miss out on the flexibility of choosing exactly which features you want.

CSS Modules transforms selectors so they become scoped locally. No more conflicting selectors!

stylelint is a CSS linter. It supports CSS syntax and CSS-like syntax.

Tailwind is a utility-first CSS framework. Instead of providing out-of-the-box styles for different components, it provides low-level utility classes that let you build completely custom designs without ever leaving your HTML.

More plugins is a searchable catalog of PostCSS plugins. It's not as good as Google, but it provides a quick way to see what popular plugins are available.

Keywords: postcss, css, scss, less, plugins

6 tips to help you learn new source code faster

Memorize the folder structure

This is probably the easiest and fastest way to get started. Source code is usually organized logically. By studying the folder structure, you get a quick overview of how the programmer conceptualizes the project. There are some popular folder structures that you might recognize; they will give you a hint on where to look.

Start from the entry point

Find the function where everything starts. Different languages have different conventions. For example, C and Java programs start in the main() function. For Node.js you'll look at package.json. For PHP it might be index.php. Once you've found the entry point, you can start tracing the program flow. Look at what gets initialized, or how the different modules are related. Another starting point might be the request handler or event handler, where user inputs get processed. Tracing the flow from there will help you map the code structure to the behaviour you get when you use the program.

Read the documentation

Don't skip the documentation! Some people just like to wing it or Google it when they run into a problem. Reading the documentation first not only reduces the need to find help, it is a quick way to glance over the high-level concepts and to dive into technical details. The author has put a lot of effort into distilling their knowledge into something that can be understood easily by newcomers. The documentation often explains complex logic that would take much longer to understand from code alone.

Read the test cases

Reading the test cases is very similar to tracing from the entry point and reading the documentation. It provides a concise way to see how the code is used and how it is supposed to work. One downside is the extra fluff that's needed to run the test cases.

Use an IDE

Being able to quickly read method documentation, view the file outline, and jump directly to definitions can greatly speed up the exploration process. If you're a plain text editor kind of person, or you've never used an IDE before, you should definitely give one a go.

Write comments

Before you say "code should be self-documenting", hear me out. If the code is under version control (which it should be), then it doesn't matter what kind of monstrosity you make of it; you can always revert the changes. When reading through source code, use comments the way you would take notes in a textbook. No one else ever needs to see them. You might even be able to help improve the existing documentation too.

How to generate CSS spacing utility with PostCSS

What is PostCSS

PostCSS is a tool for transforming CSS with JavaScript. It's like creating your own language for CSS. It unlocks limitless possibilities with what you can do with CSS.

What we're trying to achieve

We're going to recreate the Bootstrap Spacing Utility. In a nutshell, it will generate classes for margins and padding at different directions and sizes. However, I'll be using different sizes. Here's a sample of the code that will be generated:


.mt-1 {
	margin-top: 0.25rem;
}
.px-2 {
	padding-left: 0.5rem;
	padding-right: 0.5rem;
}

Because I prefer not to introduce new syntax, the plugin will look for a comment containing a special string and append the new classes after it.

The code

Let's create a file for our plugin. I'll just call mine plugin.js.


const postcss = require('postcss')

module.exports = postcss.plugin('postcss-custom', function () {
	return function (root) {
		root.walkComments(function (comment) {
			if (comment.text && comment.text.indexOf('@spacing') >= 0) {
				spacing(comment)
			}
		})
	}
})

function spacing(comment) {
	const unit = 0.25
	const directions = {
		// placed in order so that they can overwrite correctly
		t: ['top'],
		r: ['right'],
		b: ['bottom'],
		l: ['left'],
		x: ['left', 'right'],
		y: ['top', 'bottom'],
		'': [],
	}
	function generate(type, alias, direction, props, multiplier) {
		const postfix = multiplier < 0 ? 'n' + Math.abs(multiplier) : multiplier
		const rule = postcss.rule({ selector: `.${alias}${direction}-${postfix}` })
		if (direction === '') {
			rule.append(postcss.decl({
				prop: type,
				value: (multiplier * unit) + 'rem'
			}))
		} else {
		props.forEach((cur) => {
			rule.append(postcss.decl({
				prop: `${type}-${cur}`,
				value: (multiplier * unit) + 'rem'
			}))
		})
		}
		comment.after(rule)
	}

	for (let dir in directions) {
		for (let i = -2; i <= 8; i++) {
			generate('margin', 'm', dir, directions[dir], i)
			if (i >= 0) {
				generate('padding', 'p', dir, directions[dir], i)
			}
		}
	}
}

To add this plugin to PostCSS, simply add a new entry to your PostCSS config with the path to the JS file, e.g. styles/plugins/my-plugin.js

In the plugin file we need to export our plugin. When you define the plugin with postcss.plugin(), you need to give it a name. In this case it's postcss-custom.


module.exports = postcss.plugin('postcss-custom', function () {})

The above function searches the document for all comments containing the special text @spacing, then calls spacing() to generate the classes.

The important code to note in the spacing() function is the generate() helper. postcss.rule() creates a rule with the given selector. Then we add new declarations to that rule with .append(). Finally, we insert the rule after the comment with .after().
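
The selector naming in generate() is worth calling out: negative multipliers get an "n" prefix, mirroring Bootstrap's convention. Pulled out on its own:

```javascript
// The selector naming used by generate(): negative multipliers become
// "n" + absolute value, e.g. .mt-n2 for a negative top margin.
function className(alias, direction, multiplier) {
	const postfix = multiplier < 0 ? 'n' + Math.abs(multiplier) : multiplier
	return `.${alias}${direction}-${postfix}`
}

console.log(className('m', 't', -2)) // → ".mt-n2"
console.log(className('p', 'x', 3))  // → ".px-3"
```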

As you can see, creating a new PostCSS plugin is quite easy.

Keywords: postcss, js, css, tutorial

Are online freelancing platforms a source of cheap labour

Freelancing platforms are great. You can find work from all over the world. Work whenever you feel like it. Charge the rate you want. Right? Perhaps those who have tried freelancing platforms already know what their earning potential really is. There are definitely plenty of jobs to choose from. And there's no doubt you can earn money from them. But unless your cost of living is well below $1k a month, you're better off elsewhere.

The global market

Do you remember what happened when China opened its doors to foreign markets? Remember how everything became cheaper and companies moved their manufacturing to China? That's what happens when you compete in the global market. You're not just competing on skills, you're competing on price. There are plenty of skilled workers in developing countries who would gladly accept $1k per month for a high-skilled job. And there's very little reason why anyone would choose to pay more for someone who lives in a developed country.

You might think that buyers would value quality over price; it's better to pay a little more for better results. Unfortunately, buyers are often horrible at judging quality. When you buy a house, you never evaluate the materials used or the structural integrity of the building. And even if you tried, you wouldn't be very good at it. The same problem plagues freelancing platforms.

Like a dating app

If only it were just about price and quality. There's a big problem when networks get large and big tech tries to optimize them: they start ranking people the way they rank products you buy online. Like dating apps, your profile will be given priority if you have a better ranking than others. That means newcomers face a herculean task just landing their first job.

And like dating apps, the only way a buyer can judge you is by your profile: something superficial and not representative of your real skills. There's no way for you to truly showcase yourself.

The feedback loop

Because of these two simple problems, things get worse in a feedback loop. Buyers prefer cheap labour. Cheap labour gets better ratings and gets priority. Newcomers lower their rates to get jobs. With the influx of skilled and cheap labour, there is no incentive for buyers to pay more.

Should you use a freelancing platform

Although it sounds all doom and gloom, there's still some hope. Just as some people will pay double for a MacBook, some will pay more for a service they like. It is still a good way to get access to decent jobs once in a while. Perhaps it's worth a try.

Keywords: freelance, job, work

Harnessing the power of Webpack for Django development

Webpack has become the de facto JS build tool. It is used in almost every frontend project, whether it's an SPA written in React or an HTML site styled with SCSS. Because of this, backend developers who work with Django or Laravel cannot escape Webpack. If you can't beat them, join them.

How can Webpack help?

With Webpack we can automatically reload the page when our Django code changes. Similarly, we can take advantage of Webpack Hot Module Replacement to hot reload our CSS and JS files. We achieve this by using the Webpack dev server as a proxy in front of our Django server: the dev server handles all the frontend requests and passes the rest to Django.

The Setup

Let's install all the dependencies for this to work.


npm i -D webpack webpack-cli webpack-dev-server

In your Webpack config add the following:


devServer: {
	contentBase: 'path/to/static',
	host: 'localhost',
	port: 8000,
	writeToDisk: true, // optional
	hot: true,
	index: '', // to proxy root
	proxy: { // proxy everything
		context: () => true,
		target: 'http://localhost', // address of Django server
	},
	// reload for our django files
	before(app, server) {
		const chokidar = require('chokidar') // chokidar is a dependency of webpack
		chokidar.watch('path/to/django/files').on('all', function(event, path) {
			server.sockWrite(server.sockets, 'content-changed');
		})
	},
}

Set contentBase to point to your Django static directory. This is where Webpack will serve all the static content and client code.

The proxy option will send all requests the dev server cannot handle to your Django server, while Webpack serves the CSS and JS files itself. This is how we achieve HMR.

In the before hook, we watch any Django files that should trigger a full page reload. A good candidate would be the template files.

Run the following command to start the dev server:


webpack-dev-server --config ./webpack.config.js --mode development

You might also want to add a command to your npm scripts for easy access.
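
For example, a matching entry in package.json could look like this (the script name "dev" is just a suggestion):

```json
{
	"scripts": {
		"dev": "webpack-dev-server --config ./webpack.config.js --mode development"
	}
}
```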

New Levels of Productivity

Now you can edit your JS, CSS, HTML, and Python files and have the page reload automatically. We achieved that with only one Webpack config and no change to our workflow.

Keywords: django, webpack

Easiest way to reload webpack dev server when other files change

Using the webpack dev server with hot module replacement for your frontend is very convenient and productive. But if a file is not a dependency in webpack, changing it will not trigger a reload. This can be a hassle if you're editing HTML files or developing alongside a backend like Django or Laravel and would like the page to refresh when a file changes. This can be accomplished with just a few lines of code.

Add this to your webpack config:


// webpack config
devServer: {
	before(app, server) {
		const chokidar = require('chokidar');
		chokidar.watch('path/to/my-files/').on('all', function (event, path) {
			server.sockWrite(server.sockets, 'content-changed');
		});
	},
}

What does this code do?

The before() function is run by webpack when starting the dev server; it allows you to add your own middleware. require('chokidar') imports a file watching library used by webpack. It is similar to Node.js fs.watch() but with some nice functionality added; you're free to use a different library if you want. Next, we ask chokidar to watch some files. It accepts a wide variety of inputs such as globs and arrays. Each time a file changes, we tell webpack that content has changed and it will reload the page for us.

That was surprisingly easy! Yet there is almost no documentation on this anywhere, and there are even plugins dedicated to solving this problem!

Keywords: webpack, dev, server, hot, reload, backend, file, change

Comparison of Golang Input Validator Libraries

This is a comparison of some open source Golang validation libraries on GitHub. These libraries do not handle data sanitization. Each library has its strengths and weaknesses. Let's get started.

go-playground/validator

go-playground/validator implements value validations for structs and individual fields based on tags. It's probably the oldest and most popular validation library out there. It supports complex validation methods such as cross-field comparison, i18n, and diving into nested fields. This library would be great if you have complex validation requirements.

asaskevich/govalidator

asaskevich/govalidator is a package of validators and sanitizers for strings, structs and collections, and is based on validator.js. Similar to go-playground/validator, govalidator uses struct tags to define rules. It also provides individual validation functions and is a little easier to learn. This is a great alternative if you like mature and popular libraries.

go-ozzo/ozzo-validation

go-ozzo/ozzo-validation provides configurable and extensible data validation capabilities. Unlike the previous two libraries, ozzo-validation uses programming constructs instead of struct tags to define rules. This makes it less error-prone and more readable than the other two.

AgentCosmic/xvalid

AgentCosmic/xvalid is a lightweight validation library that can export rules as JSON so browsers can apply the same rules. xvalid is similar to ozzo-validation in that it uses programming constructs instead of struct tags to define rules. Being able to export rules to the browser will help you keep validation rules consistent. This is a great library if you only need simple validation.

Keywords: golang, library, validator, validation, open source, comparison, alternative

Increase Golang web app testing speed by over 2 times with parallel testing and more

We're all familiar with how tests get slower as we add more over time. If you're not already using parallel testing in Go, this article will give you an example of how to implement it in a web application. In one of my projects, I managed to decrease test time from 40s to just over 10s by employing these techniques.

The Solution

The easiest way to get major speed improvements is to add t.Parallel() to every top level test function.


func TestMyFunc(t *testing.T) {
	t.Parallel() // add this line
	... testing code
}

There's a pretty high chance that your tests might break. The biggest culprit is having singletons in your code. If any test uses shared state, whether in a singleton or not, you cannot make that particular test parallel; other tests can still be parallel. Some other obstacles are caused by bad testing practices, such as relying on previous test state for the next test, or using the same rows in a table. You can solve this by ensuring each test is designed to run in isolation.

Other Techniques

Another simple way to decrease test run time is simply not cleaning up after the tests on every run. That means not deleting rows from the database, etc. If your tests are designed to run in isolation, this will be an easy change. The other benefit of this technique is that your tests run in a "dirty" state that mimics production environments.

You can take it a step further and break tests into smaller units like so:


func TestMyFunc(t *testing.T) {
	t.Run("t1", func (t *testing.T) {
		t.Parallel()
	})
	t.Run("t2", func (t *testing.T) {
		t.Parallel()
	})
}

This will also help during debugging because you can isolate the test i.e. -run TestMyFunc/t2

You can shave off a few more seconds by running your code and database off a ramdisk. See how I run Go code from ramdisk.

Lastly, if you're not already using a tool to automatically run your code when you save a file, you should start now to save plenty of time during development.

Keywords: golang, go, test, speed, performance, parallel

How to Get File Notification Working on Docker, VirtualBox, VMWare or Vagrant

Have you tried using tools like inotify, nodemon or webpack inside Docker or VirtualBox but just couldn't get them to work? The problem lies with the virtual file system: it simply does not support the file notification API. The only existing solution is to use a polling method. One such program is fswatch, but it becomes slow once the number of files grows large.

xnotify solves all these problems while adding some useful features for common use cases. It works by sending file events from your host machine to your virtual machine over HTTP. It runs on all popular platforms (Windows, Linux and macOS) as a single binary. In addition, you can easily run commands on file change using the -- option.

Here's a real-world example of how I automatically run my Go tests and reload the server within VirtualBox:

PowerShell script on my Windows host:


.\xnotify.exe --verbose --client :8001 -i . -e "(vendor|\.git)$"

The command above watches all files in the current directory, excluding the "vendor" and ".git" folders, and sends the events to localhost:8001, the address our next program is listening on.

Bash script on my Linux VM to automatically run tests:


./xnotify --verbose --listen "0.0.0.0:8001" --base /my/project/dir --batch 50 -- go test -c app/test -- ./test.test -test.failfast

This will listen on 0.0.0.0:8001 for the file events that the Windows program is sending. Then it runs the commands after the -- option, using /my/project/dir as the working directory.

Bash script on my Linux VM to automatically reload the server:


./xnotify --verbose --trigger --listen "0.0.0.0:8001" --base /my/project/dir --batch 50 -- go build cmd/main.go -- ./main

This works the same way, except that the commands after -- are used to run the web server instead of the tests.

For more examples, see the README at the project page.

Keywords: file, notification, docker, virtualbox, vmware, vagrant, xnotify

Go: Automatically Run Tests or Reload Web Server on File Changes

Wouldn't it be convenient if you could automatically run your tests or reload your web server whenever you save a file? Here's a very simple way to achieve it. First, download xnotify. Then run the following commands in your project directory.


./xnotify -i . -- go test -c ./... -- ./test.test

That will automatically run your tests.


./xnotify -i . -- go build main.go -- ./main

That will automatically reload your web server.

What the command does is watch the current directory with -i ., then run the command that comes after --; the equivalent of running go test -c ./... && ./test.test. The -c option tells Go to compile the tests into a binary instead of running them. We need to do this so that the child process does not stay alive after the parent process is killed. Here's a more complex use case:

./xnotify -i . -e "(vendor|\.git)$" --batch 50 -- go test -c project.com/package/name -- ./test.test -test.failfast -test.run TestThisFunction

Some explanation of the command:

-e "(vendor|\.git)$" excludes the "vendor" and ".git" directories.

--batch 50 waits 50ms before executing, so that saving multiple files at once won't restart the command too many times.

This time we pass a package name, project.com/package/name, instead of a path.

-test.failfast will stop the run if any test fails.

-test.run TestThisFunction will only run the test function called "TestThisFunction".

Keywords: go, automatic, cli, xnotify, test, build, server