Dataset in React

When life gives you lemons, you make lemonade. And when it doesn’t, hack your way around it. Such was the case with a piece of React code where the previous programmer passed some data-* attributes as props, spread them onto the JSX markup, and later extracted them in the event handlers (from event.currentTarget or from the native event). Now, React isn’t quite ready for this (as of January 2017). The data-* attributes, which are generally accessible via the HTMLElement.dataset property, aren’t handled the DOM way when passed as props.

While playing around with wrappers, I had to find a workaround to pass the data-* props. I wrote a little hack that extracts the data-* attributes using a regex and uses the spread syntax to pass the dataset to both the outer wrapper and the inner component.


render() {
  // Extract the keys present in `props`,
  // filter the ones prefixed with `data-`
  // and collect them into a `dataset` object
  const dataset = {};
  const { props } = this;
  Object.keys(props).forEach(key => {
    if (/^data-\S+/i.test(key)) {
      dataset[key] = props[key];
    }
  });
  return (
    <Wrapper {...dataset} onChange={this.handleChange}>
      <Component {...dataset} name="component-type-1" />
    </Wrapper>
  );
}

The advantage: you can access the dataset in your handleChange method from event.currentTarget. Visit this MDN link to know more about dataset.
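The filtering step can also be pulled into a plain helper that is easy to unit test outside React. A minimal sketch (the helper name `extractDataset` and the sample props are mine, not from the original code):

```javascript
// Collect all `data-*` entries from a props-like object.
// Works on any plain object, so it can be tested without React.
function extractDataset(props) {
  const dataset = {};
  Object.keys(props).forEach(key => {
    if (/^data-\S+/i.test(key)) {
      dataset[key] = props[key];
    }
  });
  return dataset;
}

// Non-data props like `onChange` are left out of the result.
const props = { 'data-id': '42', 'data-role': 'menu', onChange: () => {} };
const dataset = extractDataset(props);
// dataset is { 'data-id': '42', 'data-role': 'menu' }
```

Note the regex is anchored with `^` and the `g` flag is dropped; a `g`-flagged regex keeps `lastIndex` state across `.test()` calls and can skip matches.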


Creating a Guitar Tuner – With modern web APIs

String    Frequency    Scientific pitch notation
1 (E)     329.63 Hz    E4
2 (B)     246.94 Hz    B3
3 (G)     196.00 Hz    G3
4 (D)     146.83 Hz    D3
5 (A)     110.00 Hz    A2
6 (E)     82.41 Hz     E2

This table can be obtained from the Guitar Tuning wiki page.
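For use in code later, the table can be expressed as a plain lookup (a sketch; the names `STRING_FREQUENCIES` and `frequencyOf` are my own):

```javascript
// Standard-tuning frequencies keyed by scientific pitch notation,
// taken from the table above.
const STRING_FREQUENCIES = {
  E4: 329.63, // 1st string
  B3: 246.94, // 2nd string
  G3: 196.00, // 3rd string
  D3: 146.83, // 4th string
  A2: 110.00, // 5th string
  E2: 82.41   // 6th string
};

function frequencyOf(note) {
  if (!(note in STRING_FREQUENCIES)) {
    throw new Error('Unknown note: ' + note);
  }
  return STRING_FREQUENCIES[note];
}
```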

So, this is how it goes:

  1. Our objective here is to generate the frequencies on request, e.g. when a user presses a button
  2. We make use of the Web Audio API to generate the frequencies listed above
    • As an extension to it, we could use the microphone to match the frequencies, but that won’t be covered in this part
    • Paul Lewis has an excellent app built with that approach
  3. To use the Web Audio API, we must create an instance of the AudioContext object
    • Akin to canvas, we must instantiate an audio context object before accessing the Web Audio API
    • And, to generate the frequencies, we have to create an oscillator.

// create web audio api context
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// create Oscillator node
var oscillator = audioCtx.createOscillator();

Now, we’ve to specify the type of wave. There are four natively supported types:

  • sine
  • square
  • sawtooth
  • triangle

A custom type is also available for use, but we are not getting into that.

  • We’ll use the sine wave, because that audio wave is bearable.
  • We’ve to set a frequency value at which the oscillator will produce the waves. Let’s set it to E4, i.e. 329.63 Hz.
  • We’ve to connect the oscillator to the destination exposed by the audio context. The output is generally the standard audio interface, i.e. your speakers.
  • Next, we start the oscillator.
  • Remember, an oscillator can be started once, and only once. It can be stopped, but can’t be restarted.
  • If you make live changes to the frequency or the type of the wave, the changes are reflected by the oscillator in realtime. Hence, the absence of a restart functionality won’t be felt much.

Let’s create an oscillator React component (sorry, Preact, because size matters).

Now, the markup in the snippet ahead appears gibberish here; therefore, I’ve posted a gist instead.

# oscillator.js
import { h, Component } from 'preact';
import style from './style';

const audioContext = new (window.AudioContext || window.webkitAudioContext)();

export default class Oscillator extends Component {

  play() {
    this.oscillator = audioContext.createOscillator();
    this.oscillator.type = this.props.type || 'sine';
    this.oscillator.frequency.value = this.props.frequency || 329.63; // E4 is the default
    this.oscillator.connect(audioContext.destination);
    this.oscillator.start();
  }

  stop() {
    this.oscillator.stop();
    this.oscillator = null;
  }

  render() {
    return ( /* refer the gist */ );
  }
}

The reasons for creating a new instance every time you hit the start button are

  • The start method works only once per oscillator. Hence, once stopped, there is no way to restart the oscillator.
  • There is no API to suspend and later resume an oscillator.
  • The context can be suspended and resumed later, but that doesn’t stop the oscillators. And when you resume the context after firing multiple oscillators, you hear all of them buzz simultaneously.
  • Therefore, we must create a new oscillator instance and start it every time we hit the start button, and stop-then-destroy the instance every time we hit the stop button.

Now, we’ve to make some buzzing & humming by assigning values to the props. If you can’t see the code here, follow this gist.

  <Oscillator note="E4" frequency="329.63" type="sine" />
  <Oscillator note="B3" frequency="246.94" type="sine" />
  <Oscillator note="G3" frequency="196.00" type="sine" />
  <Oscillator note="D3" frequency="146.83" type="sine" />
  <Oscillator note="A2" frequency="110.00" type="sine" />
  <Oscillator note="E2" frequency="82.41" type="sine" />

And, we're done.


Make sure to lower your speaker volume. If you’re using headphones, then definitely cross-check three times that your volume is low. I don’t want people testing it to go all Beethoven on the first day.

Hit the start/stop buttons and tune your guitar along.


In a similar fashion, we can create a chord component (possibly in the next tutorial) that creates three oscillators and plays them simultaneously to produce a resonating chord.

Hint: a frequency combination for C major is 196.00 (G), 261.63 (C) and 329.63 (E). And for G major, you can use a combination of 146.83 (D), 196.00 (G) and 246.94 (B).
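That hint can be captured in a small lookup that a chord component could consume, one oscillator per frequency (a sketch; `CHORDS` and `chordFrequencies` are names I made up):

```javascript
// Frequency triads for the chords mentioned in the hint.
const CHORDS = {
  'C-major': [196.00, 261.63, 329.63], // G3, C4, E4
  'G-major': [146.83, 196.00, 246.94]  // D3, G3, B3
};

// Resolve a chord name to the frequencies its oscillators should play.
function chordFrequencies(name) {
  if (!CHORDS[name]) {
    throw new Error('Unknown chord: ' + name);
  }
  return CHORDS[name];
}
```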

Happy humming!



How the memory usage shoots up in a VPS when you create virtual hosts


The server was running on port 80 and serving a single domain on the VPS. I `a2ensite`-ed two more domains running on different ports. Mind you, the server was rendering nothing beyond static HTML files containing `hello world` messages.

You can see the server running calmly at 2.5 MB, and after creating those domains, the memory rises beyond 12.5 MB.

That’s nearly 6 times.

In the coming weeks I’ll hunt for some data around this to present the full case: memory usage with virtual hosts.



DPI specific image loading ( Not srcset )

HTML5 provides you with the srcset attribute on img tags to load resolution-, dimension- or DPI-specific images. But techniques for loading higher-DPI images have existed since the early days of the HTML5 + CSS3 announcement.

Mobile web app developers, especially those developing for iOS Safari, have been using -webkit-device-pixel-ratio in media queries to load normal images for non-retina displays and higher-dimension images for retina displays, squeezed into the same space by adjusting the background-size property.

The same can be achieved for loading images on a standard page:

  1. Set the image src to a transparent 1×1 GIF, data URIs preferred (albeit slow, but we’re talking about a workaround, remember?)
  2. Set the dimensions of the images via CSS (also via attributes, because performance)
  3. Write/generate some CSS
  4. Set the background image for the img tag

The only barrier in this approach is dynamic images: ideally, people don’t change the master CSS file for changes in banner images.

The easiest workaround for this problem: dynamically injected inline CSS, straight into the markup, that contains the image URLs for the banner images.
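Such an injected style block could be generated along these lines (a sketch; the function name, selector and image URLs are placeholders of my own, not from the original post):

```javascript
// Build a <style> payload that serves a 2x image on high-DPI screens,
// squeezed into the same box via background-size.
function dpiCss(selector, url1x, url2x, width, height) {
  return [
    selector + ' {',
    '  background-image: url(' + url1x + ');',
    '  background-size: ' + width + 'px ' + height + 'px;',
    '}',
    '@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {',
    '  ' + selector + ' {',
    '    background-image: url(' + url2x + ');',
    '  }',
    '}'
  ].join('\n');
}

// The returned string can be rendered into an inline <style> tag
// alongside the page markup, keeping the master CSS file untouched.
const css = dpiCss('.banner img', 'banner.png', 'banner@2x.png', 600, 200);
```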

webpack sucks, at least for now

JavaScript is an interpreted language, and by that I understand that when I hit F5, I expect zero latency for the new script to appear in my local dev environment.

When you take that away by introducing obnoxious compilers like webpack, which first needs to be told how to load, then compiles, then concats before showing me the output, you’re already the subject of my fury.

The ‘wait time’ for compilation sucks

The reasons why I stayed away from CoffeeScript all these years:

  1. I know how to write ‘good’ JavaScript, and
  2. The compilation time. It sucks.

Let JS remain the interpreted language we all know and love. Don’t put your compiled language genius into it. You’re not welcome here – to the interpreted world.

Debugging is horror

You’ve to trace that line out. Imagine a project consisting of 100 small files with almost-similar-looking content that got concatenated, and now you’re clueless where exactly to hunt and debug.

It’s not uncommon: when you write derived components inherited from parent components, the siblings tend to look similar.

webpack, you’re a clutter builder and a clarity killer. And no, I’m not going to work in large files a.k.a. monoliths just to support your existence.

Lack of build blocks

I came from an AngularJS development environment, where I extensively used Yeoman, which allows me to work on an index.html file locally that references locally kept CSS and JS.

That means we don’t have to wait for a concat or compile step before we hit F5 or Ctrl+R.

Plus, the library files from bower_components stay separate. In webpack, unfortunately, they become part of the compilation step.

Luckily, we’ve wiredep and usemin blocks to the rescue, which simplify local development and give great support for production builds.

Learn something from it. Your hotness may look tempting to fools and noobs. I ain’t one. Grow up.

Till then – happy hating.


And I always believe there is no point in complaining; one must find a remedy. Following are some workarounds to reduce frustration:

The little catch(es) with Arrow Functions inside Accessors and Methods

Arrow functions are a shorthand notation for function expressions. The catch, though, is with the binding of the this keyword in the context of accessors.

Follow the code snippets below:

'use strict';
var obj = {
  a: 10
};

// Snippet A:
Object.defineProperty(obj, "b", {
  get: () => {
    console.log(this.a, typeof this.a, this);
    return this.a + 10; // `this` represents the global object `Window`
  }
});

// Snippet B:
Object.defineProperty(obj, "b", {
  get: function() {
    console.log(this.a, typeof this.a, this);
    return this.a + 10; // `this` represents the current object `obj`
  }
});
Though snippets A & B may appear to work alike, the catch is with the this binding in snippet A, where the arrow function doesn’t bind this as expected.

I’m a little puzzled, not sure if it’s a bug or a feature, because MDN mentions:

An arrow function does not create its own this context; rather, it captures the this value of the enclosing context


The binding of lexical this takes place differently in case of Arrow functions.

Examples below:

'use strict';
var obj = {
  i: 10,
  b: () => console.log(this.i, this),
  c: function() {
    console.log(this.i, this);
  }
};
obj.b(); // prints undefined, Window
obj.c(); // prints 10, Object {...}

Common assumption: this.i should behave like it does inside any other function. No.
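If you actually want methods, the fix is simply to use regular functions or the ES6 method shorthand, both of which bind this on the call. A quick sketch (the `counter` object is my own illustration):

```javascript
var counter = {
  count: 0,
  // ES6 method shorthand: behaves like a regular function expression,
  // so `this` is bound to `counter` on a `counter.increment()` call.
  increment() {
    this.count += 1;
    return this.count;
  },
  // Arrow property: `this` is captured from the enclosing scope,
  // NOT from `counter`, so `this.count` is not the counter's count.
  brokenIncrement: () => {
    return this && this.count; // `this` here is not `counter`
  }
};

var first = counter.increment();  // 1
var second = counter.increment(); // 2
```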


Another example involving call (yanked from MDN):

var obj = { base: 1 };
var f = v => console.log(v + this.base, this);
var g = function(v) { console.log(v + this.base, this); };, 2); // logs NaN, Window, 2); // logs 3, Object { base: 1 }


The anomaly in the aforementioned snippet A is ‘expected behavior in ES6’, albeit not anticipated from an ES5 standpoint.

Therefore, it can be safely concluded that arrow functions work well as plain functions, but they are not ideal candidates for methods. And, as MDN quotes:

Arrow function expressions are best suited for non-method functions


Bonus: here is a little mindfuck to play around with. Try to guess the output.
(() => ({ foo: () => ({ foo: () => ({ foo: () => ({ foo: () => ({ }) }) }) }) }))();

What? It was easy? Nice. Try the next one:
(() => () => () => () => () => ({}) )()()()()();

Pacman much?
(() => () => () => () => () => {})()()()()();

Update: added the same code as examples on MDN.

Web Scraper Blocker 1

For the uninitiated, web scrapers are applications that connect to the internet and download a web page, just like a human user downloading it to their browser. ‘Price scrapers’ is the name given to scrapers that connect to eCommerce websites, download pages, go through the textual content, and determine the price of a product.

For all site owners, scrapers are pain in the ass.

Search engines also use scrapers, a.k.a. crawlers, to download the pages on your website. No doubt, they’re useful. But crawling is not the kind of activity anybody would perform every 20 minutes. So let’s separate crawlers from scrapers, and we’ll see that:

  • Scrapers create unwanted, unsolicited, unwelcome traffic
  • They’re barely useful to the site owners.
  • They consume resources like RAM and CPU by making multiple requests
  • They steal information. Yes, it’s stealing because of the scale at which it happens

For the above reasons, site owners often block such scrapers using a few techniques:

  • They determine the source (a.k.a. the client’s) IP
  • They check the frequency at which it is accessing the site
  • If the frequency goes beyond a threshold (say, 25 requests per minute), they block the IP and redirect it to some other page
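The threshold check can be sketched as a tiny in-memory rate limiter (my own illustration; real setups usually do this at the load balancer or with a module like nginx’s limit_req, and the `now` parameter is passed in explicitly here just to keep the sketch testable):

```javascript
// Sliding-window rate limiter: returns true when the client should be blocked.
function createRateLimiter(maxRequests, windowMs) {
  const hits = new Map(); // ip -> array of request timestamps

  return function shouldBlock(ip, now) {
    const cutoff = now - windowMs;
    // keep only the timestamps inside the current window
    const recent = (hits.get(ip) || []).filter(t => t > cutoff);
    recent.push(now);
    hits.set(ip, recent);
    return recent.length > maxRequests;
  };
}

// 25 requests per minute allowed; the 26th inside the window gets blocked.
const shouldBlock = createRateLimiter(25, 60 * 1000);
// per request: shouldBlock(clientIp, → redirect if true
```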


http2, https and more

Before we begin, let this information sink in. Watch the following video:


Long back, when I was serving an eCommerce major, our team evaluated http2 over http and concluded that the gains were minimal. TBH, I felt our premise was wrong and our approach to evaluating http2 was flawed (read ahead to know why). It was later concluded that we were not going for http2 because it had low ROI.

Right now, in my current org, I tried to push for http2 along with https (https for the reasons mentioned ahead). My proposal wasn’t accepted, again because of low ROI.

There are several costs associated with going with http2 + https, viz.:

  • Investing on procuring an SSL certificate
  • Evaluating nginx 1.9.5
  • Reading through the documentation and setting up the nginx.conf
  • Troubleshooting on staging / prod environments
  • Change in build script to optimize the outputs for http2 protocol
  • Despite being the least of our concerns, the lack of support in legacy browsers is kind of inhibiting if your priority is to get everyone onboard.
  • Another small concern is over 3rd-party tools: all web apps use premium (not free) 3rd-party tools to study user behavior, and checking their compatibility with https is important. (A small concern, because most 3rd-party tools realize this and serve their content over https. But there could be smaller players who do not have this capability.)
  • Change of origin: a change in protocol, i.e. from http to https, changes the port from :80 to :443. This alters the URI scheme and hence implies that the origin has changed. I’ve not validated this or its areas of impact, but if it impacts the current SEO or anything else, it’ll be a bigger concern to us than anything else
  • Non-secure content: we load our static assets from CDNs and, thankfully, Amazon CloudFront supports both http and https. But if any of our providers failed to provide an https endpoint, we’d be helpless

Reason for going with https:

  1. The idiosyncrasies of proxy servers and antivirus software that sniff unencrypted http1.1 content. If they spot any anomaly in the headers, e.g. the http version, they’ll simply flag the content as malicious
  2. Google Chrome is anyway going to shame non-https websites
  3. https gets elevated priority in SEO ranking over non-https. At least Google obeys this, and as a JS-dynamic-template-heavy website, my sole hope for SEO is Google’s PageRank algorithm alone.

What went wrong with our previous http2 evaluation

http2 is not just the version digit incremented; the version transition indicates a total paradigm shift from the earlier version. The http2 protocol works better with many small, split files. Hence, our age-old practice of concat-minify-obfuscate-revv won’t work.

Key takeaway 1:

To get the best out of http2, you need many small files, minified-obfuscated-revved, not concatenated into a single file.

Check these links to get a better idea of the goodness of many small files:

Bonus tip: for a cherry on top of the cake, you can further use AMD to load modules whenever needed.

Our last evaluation was based on testing speeds with single large files. Hence, the gains looked minimal. http2 wasn’t designed to perform better with large files.

Key takeaway 2:

Domain sharding is no longer a requirement.

To parallelize static asset loading, we depended heavily on domain sharding, i.e. splitting resource requests across multiple domains, thereby opening multiple TCP connections.

http2 doesn’t require that. Multiple static resources should be requested over one, and only one, TCP connection. Unfortunately, this is not how we evaluated it.

Key takeaway 3:

Encrypted connections, i.e. https, are not slow. Google’s SPDY protocol, which could be enabled by just flipping another flag, was the best way of loading resources over https until http2 came in.

http2 had to be good enough for Google to declare SPDY’s annihilation and end its further usage & support.

What to do next to convince your team to go for http2 + https

Every decision in an organization should be based on facts and data, a.k.a. data-driven decisions. Decisions can’t be made on the basis of popular remarks or opinions. So:

  • Gather data about https adoption across industry
    • gather all benchmarking studies and results
    • gather its success stories
    • perform your perf tests on your existing system and gather benchmarking data
    • analyze performance data from http2 & utilize this data to show comparisons
      • e.g. if your new server gives a time boost of even a thousand milliseconds, that’s a major save
    • PS: performance tests can be baffling and overwhelming. One feels as if they’re part of some Formula 1 team doing performance improvements
  • Clearly explain the need of encryption and how encryption leads to greater trust and security
  • Make everyone understand that SSL certificates are no longer hard-to-obtain
    • Companies like StartSSL can offer you a free SSL certificate to get started with
    • Additionally, your bash console comes powered with openssl tools. You can leverage them to create a self-signed certificate for your dev environments
  • Start your POC
    • fork your repo, create an experimental branch
    • Perform benchmarking tests
    • and, do an A/B test
    • Check the conversion rates on each system

I’m using Node.js / Apache. How can I go for http2?

At the time of writing this article, I haven’t explored Node.js support for http2. There could be libraries to help you out with this, or maybe Node inherently supports http2 out of the box; I do not know yet (I will update this post when I figure it out). The same applies to Apache as well.

However, nginx 1.9.5 has http2 enabled. Therefore, you can always put an nginx proxy in front of your current server, be it Node.js, Apache, or any other server.

  • Setup nginx 1.9.5 on your box
  • Specify http2 with ssl along with http
  • Upload your certificates & configure the server correctly
  • Run your nodejs server on a different (system unreserved) port (you can block this port from public access too)
  • Configure the nginx proxy to consume data from nodejs server
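The steps above map to an nginx server block along these lines (a sketch; the domain, certificate paths and the upstream port 3000 are placeholders of my own, not from the original post):

```nginx
server {
    # `http2` on the listen directive enables HTTP/2 for this vhost
    listen 443 ssl http2;

    # placeholder paths; point these at your real certificate files
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        # hand every request to the Node.js app on an unreserved port
        proxy_pass http://ww.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```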

This ensures that nginx (which is well maintained, free, and supports the required http2 + https setup) takes charge of encrypting and http2-fying your site while your Node.js app keeps working the way it always has.

Grow up bozo!

Every day I meet programmers who have cleared tough programming rounds. We made them write tough algorithms, from inverting a binary tree to dynamic programming. Some of them were so far ahead of their contemporary peers that they’d evaluated all of the existing frameworks.

However, to my utter surprise, when it comes to working, their everyday thought process seems to have diverged tangentially since the day we hired them.

We ask you data structures, in an interview no less, to understand whether you follow best practices under pressure, so that when the day comes that we run into a production bug prioritised P1 with an ETA a few hours away, you deliver it as sincerely as you wrote that binary tree on the day of the interview.

[Screenshot: Screen Shot 2016-01-23 at 3.14.21 pm]

(the image above is unrelated to the rant, but shows how deep recursion can go)


Reason behind this rant:

One day at work, I came across a piece of code written by an abler colleague: a JSON parser that constructs a tree view out of a deeply nested JSON tree.

  1. When you see a nested structure like that, the immediate structure that should come to mind is a graph or a tree, or at least a linked list. You don’t recurse, dammit.
  2. You intended to write a lot of if-else‘s. If those if-else blocks give you orgasms, then make sure you narrate the experience in some documentation or even comments, so that a reader gets an idea of how shitty your sexual ideas are
  3. No unit testing
    • Why do people have a feeling that they’re doing the universe a favour by writing proper, descriptive unit tests?
    • Why do your unit tests always look like clichéd movie one-liners?
      • AllTestCases.forEach(function(testCase) {
          assert(testCase.mockedService(testCase.SampleInputJSONObject), testCase.expectedOutPutObject);
        });
    • Why do your commit messages look like snarky remarks on movie trailers?
      • "Fixed this because some shit was happening in XYZ module"
      • Really? That “some shit” was highly insightful, thank you.
    • Why am I able to judge your upbringing from the manner in which you practice software engineering?
    • If you write that kind of commit message and that kind of recursive code, clearly, I will judge you 100 times before writing a line of code


Honestly, your tiny shit (or call it a piece of code) is not worth that debugging effort. But regressions are painful, and even more painful when you’ve written a useless unit test.

AngularJS: Scroll into View after $digest

I was using jQuery with AngularJS in one of my projects, and there was a requirement to fetch some content and scroll it into the visible area upon arrival.

The general approach to fetching content is a promise, and we can assign the contents to the scope in the success callback.

.then(function( contents ) {
  $scope.contents = contents;
  $scope.loading = false;
  var value = $("#section").offset().top - $("#section").height();
  $("html, body").animate({
    scrollTop: value
  }, 500);
});

But an issue arises while determining the offset position: we measure the offset synchronously, and by that time $digest() hasn’t finished, hence there is no content in the section.

In such a case, we can either defer the call using a setTimeout or $timeout and observe the $$phase.

But what could save us some frustration is a callback function that runs after $digest() has finished.

Finally, I found a solution to this in git-issue comment-33020323, as mentioned in this SO answer.

And, it worked!!!

.then(function( contents ) {
  $scope.contents = contents;
  $scope.loading = false;
  $scope.$$postDigest( function() {
    var value = $("#section").offset().top - $("#section").height();
    $("html, body").animate({
      scrollTop: value
    }, 500);
  });
});