TimNew's Lab

Digital Bits in my thought

JavaScript Prototype Chain Mutator


In the JavaScript world, JSON serialization is everywhere. Data fetched from a server via Ajax is usually represented as JSON, and configuration or data files loaded by a Node.js application are usually in JSON format as well.

JSON serialization is powerful and convenient, but it has limitations. For security and other reasons, behavior and type information are forbidden in JSON: function members are removed when a JavaScript object is stringified, and functions are not allowed in JSON at all.
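A quick demonstration of what gets lost:

```javascript
// Function members are silently dropped by JSON.stringify,
// so behavior does not survive the round trip.
var model = {
  width: 3,
  height: 4,
  area: function () { return this.width * this.height; }
};

var json = JSON.stringify(model);
console.log(json); // {"width":3,"height":4}

var restored = JSON.parse(json);
console.log(typeof restored.area); // "undefined"
```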

Compared to YAML in Ruby, this limitation is inconvenient when writing JavaScript applications. For example, to consume JSON data fetched from the server via Ajax, I really wish I could invoke methods on the deserialized model.

Here is a simple example:

Ideal World
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?

  area: ->
    @width * @height

$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)
  console.log rect.area() # This doesn't work, because rect is a plain object

The code doesn't work because rect is a plain object, which doesn't contain any behavior. Some people call such an object a DTO (Data Transfer Object) or a POJO (Plain Old Java Object), concepts borrowed from the Java world. Here we'll call it a DTO.

There are various approaches to adding behavior to a DTO, such as creating a behavior wrapper around the DTO, or creating a new model with behavior and copying all the data from the DTO into the model. These practices are borrowed from the Java world, or the traditional object-oriented world in general.

In fact, in JavaScript there is a better and smarter way to achieve this: object mutation, altering an object's prototype chain on the fly to convert the object into an instance of a specific type. The process is quite similar to genetic mutation in biology, which converts one species into another by altering its genes, so I borrowed the term mutation.

With this idea, we can achieve the following:

Mutate rect with Mutator
class Rect
  constructor: (width, height) ->
    @width = width if width?
    @height = height if height?

  area: ->
    @width * @height

$.get '/rect/latest', (rectJSON) ->
  rect = JSON.parse(rectJSON)

  mutate(rect, Rect)

  console.log rect.area()

The key to implementing the mutate function is to simulate the behavior of the new operator: alter object.__proto__ and then apply the constructor to the instance! For more detail, check out the mutator library, which is available as both an NPM package and a Bower package.
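As a rough sketch of the idea (not the actual library code; Object.setPrototypeOf is the standardized way to alter the prototype chain):

```javascript
// A minimal mutate() sketch: re-point the object's prototype,
// then re-apply the constructor to finish what `new` would have done.
function mutate(obj, Type) {
  Object.setPrototypeOf(obj, Type.prototype); // equivalent to obj.__proto__ = Type.prototype
  Type.call(obj);
  return obj;
}

function Rect(width, height) {
  if (width != null) this.width = width;
  if (height != null) this.height = height;
}
Rect.prototype.area = function () {
  return this.width * this.height;
};

var rect = JSON.parse('{"width":3,"height":4}');
mutate(rect, Rect);
console.log(rect instanceof Rect); // true
console.log(rect.area());          // 12
```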

When implementing the mutator, I found that in IE (again, the evil IE) the idea doesn't work. Before IE 11, the prototype chain of an instance is not accessible: there is nothing equivalent to object.__proto__ in IE 10 and earlier. The closest workaround is a hard copy of all the members, but it still fails type checks and some dynamic usages.
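The hard-copy workaround can be sketched like this (mutateByCopy is a hypothetical helper; note how the type check still fails):

```javascript
// Fallback for environments without an accessible __proto__ (e.g. IE <= 10):
// hard-copy prototype members onto the plain object. Behavior works,
// but `instanceof` checks still fail.
function mutateByCopy(obj, Type) {
  for (var key in Type.prototype) {
    obj[key] = Type.prototype[key];
  }
  Type.call(obj);
  return obj;
}

function Rect(width, height) {
  if (width != null) this.width = width;
  if (height != null) this.height = height;
}
Rect.prototype.area = function () {
  return this.width * this.height;
};

var copied = mutateByCopy(JSON.parse('{"width":3,"height":4}'), Rect);
console.log(copied.area());          // 12
console.log(copied instanceof Rect); // false, the type check fails
```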

Background

object.__proto__ was a Mozilla "private" implementation until EcmaScript 6.
It is interesting that most JavaScript engines support it, except IE's.
Luckily, IE 11 introduced some EcmaScript 6 features, and object.__proto__ is one of them.

Converting Between HTML 5 Data-attribute Style Hyphen Names and JavaScript Camel-case Names


I found a bug in widget.coffee today. To fix it, I needed to convert between HTML 5 data-attribute names and JavaScript property names, e.g. between data-action-handler and actionHandler.

Taking the jQuery implementation as a reference, I came up with a set of utility functions for the conversion:

NameConversion
Utils =
  hyphenToCamelCase: (hyphen) -> # Convert 'action-handler' to 'actionHandler'
    hyphen.replace /-([a-z])/g, (match, letter) ->
      letter.toUpperCase()

  camelCaseToHyphen: (camelCase) -> # Convert 'actionHandler' to 'action-handler'
    camelCase.replace(/([A-Z])/g, '-$1').toLowerCase()

  attributeToCamelCase: (attribute) -> # Convert 'data-action-handler' or 'action-handler' to 'actionHandler'
    Utils.hyphenToCamelCase attribute.replace(/^(data-)?(.*)/, '$2')

  camelCaseToAttribute: (camelCase) -> # Convert 'actionHandler' to 'data-action-handler'
    'data-' + Utils.camelCaseToHyphen(camelCase)

Here is a more solid JavaScript implementation based on the previous one.

A solid JavaScript version
var Utils = (function() {
  function hyphenToCamelCase(hyphen) {
    return hyphen.replace(/-([a-z])/g, function(match, letter) {
      return letter.toUpperCase();
    });
  }

  function camelCaseToHyphen(camelCase) {
    return camelCase.replace(/([A-Z])/g, '-$1').toLowerCase();
  }

  function attributeToCamelCase(attribute) {
    return hyphenToCamelCase(attribute.replace(/^(data-)?(.*)/, '$2'));
  }

  function camelCaseToAttribute(camelCase) {
    return 'data-' + camelCaseToHyphen(camelCase);
  }

  return {
    hyphenToCamelCase: hyphenToCamelCase,
    camelCaseToHyphen: camelCaseToHyphen,
    attributeToCamelCase: attributeToCamelCase,
    camelCaseToAttribute: camelCaseToAttribute
  };
})();
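A quick round-trip check, using standalone copies of the two core helpers:

```javascript
// Standalone copies of the two core conversion helpers,
// plus a round-trip sanity check.
function hyphenToCamelCase(hyphen) {
  return hyphen.replace(/-([a-z])/g, function (match, letter) {
    return letter.toUpperCase(); // capture group carries the letter to upcase
  });
}

function camelCaseToHyphen(camelCase) {
  return camelCase.replace(/([A-Z])/g, '-$1').toLowerCase();
}

console.log(hyphenToCamelCase('action-handler')); // "actionHandler"
console.log(camelCaseToHyphen('actionHandler'));  // "action-handler"
console.log(hyphenToCamelCase(camelCaseToHyphen('actionHandler'))); // "actionHandler"
```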

Remove Bower From Your Build Script


The mysterious broken build

This morning, our QA told us that knockout, a JavaScript library we use in our web app, was missing from the staging environment. We checked the package she got from the CI server, and the library was indeed not included. But when we generated the package on our local dev boxes, knockout was included.

This was a big surprise to us, because we share the exact same build scripts and environment between dev boxes and CI agents, and because we manage our front-end dependencies with Bower. In our gulp script, we ask Bower to install the dependencies every time to make sure they are up to date.

The root cause of the broken build

After spending hours diagnosing the CI agents, we finally figured out the reason. It's a tricky story:

When the Knockout maintainers released the v3.1 Bower package, they made a mistake in the bower.json config file, which packaged the spec folder instead of the dist folder. The package was actually broken, because the main JavaScript file dist/knockout.js described in bower.json didn't exist.

Later, the engineers realized their mistake and fixed the issue by releasing a new package. Perhaps because they hadn't changed any script logic, they released the new package under the same version number, and that is the criminal who broke our builds.

We were unlucky enough that the broken package was downloaded on our CI server the first time our build script executed there, and the broken package was stored in the Bower cache.

Because of Bower's cache mechanism, the broken package keeps being used until the version is bumped or the cache expires. This is why our build was broken on the CI server.

But on our dev boxes, for some reason, we had run bower cache clean, which invalidated the cache. This is why we could generate a good package locally.

This is a very tricky kind of issue when using Bower to manage dependencies. Although it wasn't completely our fault, it is close to the worst case we can face: the build broke silently, with no error logs or messages to help figure out the reason. (Well, we hadn't had a chance to set up smoke tests for our app yet, so it is partly our fault.)

We thought we had been careful enough by cleaning the bower_components folder every time, but that actually prevented us from spotting the real cause.

After fixing the issue, I discussed it with my pair Rafa, and we came up with some practices that could help avoid this kind of problem:

Best practices

  • Avoid bower install or any equivalent step (such as gulp-bower, grunt-bower, etc.) in the build script
  • Check bower_components into the code repository, or download the dependencies from our self-managed repository for large projects.
  • When dependencies are changed, manually install them and make sure they’re good.

After doing this, our build script even runs faster, because we no longer need to check that all dependencies are up to date every time. That's a bonus of removing bower install from the build script.

Some thoughts on the package system

Bower components are maintained by the community, and there is no strict quality control to ensure a package is bug-free or released in an appropriate way. So it is safer to check them manually and lock them down across environments.

This could be a common issue for all kinds of community-managed package systems. Not just Bower: it could be Maven, Ruby Gems, Node.js packages, Python pip packages, NuGet packages, or even Docker containers!

The Battle Between Game Developers and Game Hackers - Part.1 How HiddenInt Works


Harvest: Massive Encounter is a very unique strategic tower defense game published by Oxeye Game Studio. The game is amazing, but I won't focus on that; instead, I'll discuss something interesting I discovered while hacking the game.

By hacking the game, I wanted to lock down the amount of Mineral I have in the game. Mineral is the game's only key resource, used to build and upgrade structures. Theoretically, locking down a value is easy: scan the memory for a specific number a few times to filter down the list of potential memory addresses, try them one by one, and once you've found the proper address, lock it down with a game hacking tool. It's a very standard approach, supported by most game hacking tools.

But in Harvest, the story is quite different. By searching for the mineral value, we can locate a specific address, but we quickly find that this value is only used for display; it isn't the real game state. In fact, Harvest uses a quite unique approach to protect its game state data! The Oxeye guys call it the Hidden Int.

Here is pseudo-code that explains how it works:

HiddenInt pseudo implementation
class HiddenInt {

  private int mask;
  private int hashedValue;
  private Random maskGenerator;

  public HiddenInt(int initialValue) {
    maskGenerator = new Random();
    setValue(initialValue);
  }

  public void setValue(int value) {
    this.mask = maskGenerator.next();
    this.hashedValue = value ^ this.mask; // ^ is xor operator.
  }

  public int getValue() {
    int value = this.hashedValue ^ this.mask;
    setValue(value); // Generate a new mask and hashed value every time when the value is read.
    return value;
  }

  public int modifyValue(int difference) {
    int value = getValue();
    value += difference;
    setValue(value);
    return value;
  }

  public ~HiddenInt() { // Destructor
    delete maskGenerator;
  }
}

As you can see from the code, the plain value is never stored in memory. Instead, a "hashed" value is stored, which is the plain value XORed with a randomly generated mask. And every time the value is read or written, the mask changes. This kills almost every memory scanning feature of every game hacking tool! You cannot find the plain value in memory, so you have to use "fuzzy scan", which detects value changes instead of scanning for a specific value. But the data in memory keeps changing even when the value doesn't (I'll explain why later), so fuzzy scan doesn't work here either!! That's a master kill!
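The core trick rests on the XOR identity (value ^ mask) ^ mask === value, which a standalone sketch (not the game's actual code) can demonstrate:

```javascript
// XOR masking: the plain value never needs to be stored,
// only the masked value and the mask.
function maskedPair(value) {
  var mask = Math.floor(Math.random() * 0x7fffffff); // stand-in mask generator
  return { mask: mask, hashed: value ^ mask };
}

var minerals = 1500;
var stored = maskedPair(minerals);

// The bytes in memory (stored.hashed) almost never equal the plain value...
console.log(stored.hashed === minerals); // almost certainly false

// ...but the value is always recoverable by re-applying the mask.
console.log((stored.hashed ^ stored.mask) === minerals); // true
```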

Besides keeping the mask changing, it also keeps reading the value out and writing it back, which defeats the "value freeze" feature of most game editing tools! Unless you can freeze the mask and the hashed value at exactly the same time, freezing just one of them breaks the value. And since the reading and writing happen at a very high frequency (again, I'll explain why later), you would have to lock the value down at a very specific point, or the locked value will be overwritten immediately.

Furthermore, since this is a key game value displayed on the game UI all the time, getValue() is called every time the UI renders, and game UIs usually render at 60fps or more. So the value is read 60 times per second, and the mask changes 60 times per second! (This is why the value keeps changing at such a high frequency.)

So this is an almost invincible way to keep key game state data safe from common game hacking tools!

In the next post, I'll explain how to find a bypass for this security mechanism!

Process.nextTick Implementation in Browser


Recursion is a common trick that is often used in JavaScript programming, and infinite recursion causes stack overflow errors. Some languages resolve this issue by introducing automatic tail call optimization, but in JavaScript we need to take care of it on our own.

To solve the issue, Node.js has the utility function process.nextTick, which ensures specific code is invoked after the current function has returned. In the browser there is no standard way to do this, so workarounds are needed.

Thanks to Roman Shtylman (@defunctzombie), who created node-process for Browserify, which simulates the Node.js API in a browser environment. Here is his implementation:

node-process

process.nextTick implementation
process.nextTick = (function () {
    var canSetImmediate = typeof window !== 'undefined'
    && window.setImmediate;
    var canPost = typeof window !== 'undefined'
    && window.postMessage && window.addEventListener;

    if (canSetImmediate) {
        return function (f) { return window.setImmediate(f) };
    }

    if (canPost) {
        var queue = [];
        window.addEventListener('message', function (ev) {
            var source = ev.source;
            if ((source === window || source === null) && ev.data === 'process-tick') {
                ev.stopPropagation();
                if (queue.length > 0) {
                    var fn = queue.shift();
                    fn();
                }
            }
        }, true);

        return function nextTick(fn) {
            queue.push(fn);
            window.postMessage('process-tick', '*');
        };
    }

    return function nextTick(fn) {
        setTimeout(fn, 0);
    };
})();

Here are some comments on the implementation.

setTimeout

To simulate nextTick behavior, setTimeout(fn, 0) is a well-known and easy-to-adopt approach. The problem is that setTimeout does heavy work, and calling it in a loop causes significant performance issues. So we should use a cheaper approach when possible.

setImmediate

There is a function called setImmediate, which behaves quite similarly to nextTick but with a few differences in how it deals with I/O. In a browser environment there is no such I/O concern, so we can safely replace nextTick with it.

Immediates are queued in the order created, and are popped off the queue once per loop iteration. This is different from process.nextTick which will execute process.maxTickDepth queued callbacks per iteration. setImmediate will yield to the event loop after firing a queued callback to make sure I/O is not being starved. While order is preserved for execution, other I/O events may fire between any two scheduled immediate callbacks.

setImmediate(callback, [arg], […]) Node.js

The setImmediate function is a perfect replacement for nextTick, but it is not supported by all browsers. Only IE 10 and Node.js 0.10+ support it; Chrome, Firefox, Opera, and all mobile browsers don't.

Note: This method is not expected to become standard, and is only implemented by recent builds of Internet Explorer and Node.js 0.10+. It meets resistance both from Gecko (Firefox) and Webkit (Google/Apple).

window.setImmediate MDN

window.postMessage

window.postMessage enables developers to access the message queue in the browser. With some additional code, we can simulate nextTick behavior on top of the message queue. It works in most modern browsers, except IE 8: there the API is implemented synchronously, which introduces an extra level of stack push, so it cannot be used to simulate nextTick.

Overall, there is no perfect workaround for the nextTick issue for now. All the solutions have different limitations; we can only hope the issue will be resolved in a future ECMAScript standard.

Android Studio 0.6.1 SDK Recognition Issue When Using Android SDK 19 and Gradle


A few days ago I upgraded my Android Studio to version 0.6.1 and migrated my Android project's build system from Maven to Gradle. Then the nightmare happened!

Android Studio Version

It looks like there is an issue with Android Studio 0.6.1: it cannot recognize the jar files in Android SDK 19 (4.4 KitKat). As a consequence, none of the fundamental Android classes are recognized properly, which makes the IDE almost impossible to use.

Classes Not Recognized

After spending days googling and experimenting, I realized the issue is caused by Android Studio not recognizing the SDK 19 content properly.

Here is the content of Android SDK 19 that Android Studio 0.6.1 identified:

SDK in Android Studio

For comparison, here is the proper content of Android SDK 19 with Google API:

SDK in IDEA

And here is the proper content of Android SDK 19 retrieved from the Maven repository:

Maven SDK

From the list, you can easily see that the android.jar file is missing! That's why the classes are not properly recognized. Moreover, if you compare the list against JDK 1.6, you'll find most of the content is the same.

JDK

Ideally, fixing this issue should be quite easy: Android Studio provides a Project Settings dialog that allows developers to adjust the SDK configuration.

Project Settings Dialog: Project Settings

But for Gradle projects, Android Studio displays a greatly simplified project settings dialog instead of the original one, which no longer allows configuring the SDK from the dialog.

Gradle Project Settings Dialog: Project Settings

For now, I have figured out several potential workarounds; I hope they help:

  1. Downgrade the SDK version from 19 to 18.
    If you don't really need SDK 19 features, downgrading to SDK 18 fixes the issue.
  2. Use IntelliJ IDEA instead of Android Studio.
    Though I encountered a different issue when using IDEA: it fails to sync the Gradle file.
  3. Use Maven or Ant instead of Gradle.
    Gradle is powerful, but there are too many environment issues when using it with IDEs… Maven is relatively more stable.

I haven't figured out a perfect solution to this issue; I just hope Google can fix it as soon as possible.

Is Android API Document on ConsumerIrManager Lying?


I just found a shocking fact: the Android API document on the ConsumerIrManager.transmit method is wrong!

KitKat has released its own infrared blaster API, which is incompatible with the legacy Samsung private API. So I was working on the Android Infrared Library to adapt automatically to both the Samsung private API and the official KitKat API.

After I finished coding according to the document, I found the app broke on my Galaxy Note 3 with KitKat, while it worked perfectly on Jelly Bean.

I then noticed it takes a longer time to transmit the same sequence after upgrading to the new API. (When the IR blaster is working, the LED indicator on the phone turns blue, and the time the indicator stayed blue was significantly longer than before.) Also, my IR recorder could no longer recognize the sequence sent by my phone.

After several hours, I figured out the reason: the pattern was encoded in the wrong way, even though I'm pretty sure I strictly followed the API document.

So I reached the conclusion that the ConsumerIrManager implementation on the Samsung Note 3 differs from what the Android API document describes. However, I'm not sure whether the Android document is lying or Samsung implemented the driver incorrectly.

Here are the technical details of the issue and its solution:

An IR command is transmitted by turning the IR blaster LED on and off for certain periods of time. So each IR command can be represented as a series of time periods that indicate how long the LED is on or off. The difference between the Samsung API and the KitKat API is how this time is measured.

carrierFrequency The IR carrier frequency in Hertz.
pattern The alternating on/off pattern in microseconds to transmit.

According to the Android Developer Reference, the time in KitKat is measured in microseconds.

But for Samsung, the time is measured as a number of cycles. Take NEC encoding as an example: the frequency is 38kHz, so the cycle time T ~= 26us. BIT_MARK is 21 cycles, which is a period of around 26us x 21 ~= 546us.

So ideally, ignoring the lead-in and lead-out sequences, to send the code 0xA in NEC encoding, the Samsung API needs 21 60 21 21 21 60 21 21, while the KitKat API needs 560 1600 560 560 560 1600 560 560.

But according to my experiments, the Android Developer Reference is wrong. Even on KitKat, the time sequence is measured in number of cycles instead of microseconds!

So to fix the issue, you need a little mathematical work. Here is the conversion formula:

n = t / T = t * f / 1000000

  n: the number of cycles
  t: the time in microseconds
  T: the cycle time in microseconds
  f: the transmitting frequency in Hertz
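As a sketch, the conversion for a whole pattern (microsecondsToCycles is my own name, not an Android API):

```javascript
// Convert a microsecond pattern (the unit the KitKat docs describe)
// into cycle counts (the unit my Note 3 actually expects),
// using n = t * f / 1000000.
function microsecondsToCycles(pattern, frequency) {
  return pattern.map(function (t) {
    return Math.round(t * frequency / 1000000);
  });
}

// NEC example at a 38 kHz carrier: a 560us BIT_MARK is about 21 cycles.
console.log(microsecondsToCycles([560, 1600, 560, 560], 38000));
// [ 21, 61, 21, 21 ]
```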

Create Shortcut for Your Project Mingle With Alfred


A few days ago, I posted a blog (Create shortcut for your project Mingle with Chrome OmniBox) about accessing a specific Mingle story card directly from Chrome's "OmniBox". The shortcut is created by registering a Mingle search as a custom search in Chrome.

Today I found we can go one step further by applying the same trick to launcher apps. On Mac OS, almost all launcher apps support custom searches, such as Alfred, Quicksilver, and Launcher. Even on Windows, we have Launchy, and for Linux I believe there is similar stuff. So the trick is environment independent.

Adding a custom search query differs between launcher apps, but it should be straightforward.
I'll take Alfred as an example:

  1. Open Alfred Preference Alfred

  2. Register Mingle as custom search Alfred

The URL for the custom search can be obtained with the same approach described in the previous post. Alfred uses {query} as the placeholder, so you should replace %s with {query} when copying the URL from Chrome to Alfred.

Use Jade as Client-side Template Engine


Jade is a concise and powerful JavaScript HTML template engine. Thanks to its awesome syntax and powerful features, it has almost become the default template engine for Node.js web servers.

Jade is well known as a server-side HTML template engine, but it can also be used as a client-side template engine, which is barely known! To understand how, we should first look at how the Jade engine works.

When translating a Jade file into HTML, the Jade engine actually performs two separate tasks: compiling and rendering.

Compiling

Compiling is an almost transparent process when rendering Jade files directly into HTML, including when rendering a Jade file with the jade CLI tool. But it is actually the most important step in translating a Jade template into HTML: compiling translates the Jade file into a JavaScript function, and during this process all the static content is translated.

Here is a simple example:

Jade template
doctype html
html(lang="en")
  head
    title Title
  body
    h1 Jade - node template engine
    #container.col
      p You are amazing
      p.
        Jade is a terse and simple
        templating language with a
        strong focus on performance
        and powerful features.
Compiled template
function template(locals) {
    var buf = [];
    var jade_mixins = {};
    buf.push('<!DOCTYPE html><html lang="en"><head><title>Title  </title></head><body><h1>Jade - node template engine</h1><div id="container" class="col">     <p>You are amazing</p><p>Jade is a terse and simple\ntemplating language with a\nstrong focus on performance\nand powerful features.</p></div></body></html>');
    return buf.join("");
}

As you can see, the template is translated into a JavaScript function that contains all the HTML data. In this case, since we didn't introduce any interpolation, the HTML content is fully generated.

Things become more complicated when interpolation, each, or if statements are introduced.

Jade template with interpolation
doctype html
html(lang="en")
  head
    title =title
  body
    h1 Jade - node template engine
    #container.col
    ul
      each item in items
        li= item

    if usingJade
      p You are amazing
    else
      p Get it!

    p.
      Jade is a terse and simple
      templating language with a
      strong focus on performance
      and powerful features.
Compiled template with interpolation
function template(locals) {
    var buf = [];
    var jade_mixins = {};
    var locals_ = locals || {}, items = locals_.items, usingJade = locals_.usingJade;
    buf.push('<!DOCTYPE html><html lang="en"><head><title>=title  </title></head><body><h1>Jade - node template engine</h1><div id="container" class="col"></div><ul>');
    (function() {
        var $$obj = items;
        if ("number" == typeof $$obj.length) {
            for (var $index = 0, $$l = $$obj.length; $index < $$l; $index++) {
                var item = $$obj[$index];
                buf.push("<li>" + jade.escape(null == (jade.interp = item) ? "" : jade.interp) + "</li>");
            }
        } else {
            var $$l = 0;
            for (var $index in $$obj) {
                $$l++;
                var item = $$obj[$index];
                buf.push("<li>" + jade.escape(null == (jade.interp = item) ? "" : jade.interp) + "</li>");
            }
        }
    }).call(this);
    buf.push("</ul>");
    if (usingJade) {
        buf.push("<p>You are amazing</p>");
    } else {
        buf.push("<p>Get it!</p>");
    }
    buf.push("<p>Jade is a terse and simple\ntemplating language with a\nstrong focus on performance\nand powerful features.</p></body></html>");
    return buf.join("");
}
Data for interpolation
{
  "title": "Jade Demo",
  "usingJade": true,
  "items":[
    "item1",
    "item2",
    "item3"
  ]
}
Output Html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>=title  </title>
  </head>
  <body>
    <h1>Jade - node template engine</h1>
    <div id="container" class="col"></div>
    <ul>
      <li>item1</li>
      <li>item2</li>
      <li>item3</li>
    </ul>
    <p>You are amazing</p>
    <p>
      Jade is a terse and simple
      templating language with a
      strong focus on performance
      and powerful features.
    </p>
  </body>
</html>

Well, as you can see, the function has become much more complicated than before. It becomes even more complicated when extends, include, or mixin are introduced; you can try it on your own.

Rendering

After compiling, the rendering process is quite simple: just invoke the compiled function, and the returned string is the rendered HTML. The only thing worth mentioning is that the interpolation data should be passed to the template function as locals.
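For example, calling a compiled template function (a hand-written stand-in with the same shape as a Jade-compiled function, not actual compiler output):

```javascript
// A stand-in for a Jade-compiled template: a plain function that
// takes the interpolation data as `locals` and returns an HTML string.
function template(locals) {
  var buf = [];
  var locals_ = locals || {};
  buf.push('<h1>' + locals_.title + '</h1>');
  buf.push('<p>You are amazing</p>');
  return buf.join('');
}

// Rendering is plain function application.
var html = template({ title: 'Jade Demo' });
console.log(html); // <h1>Jade Demo</h1><p>You are amazing</p>
```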

Using Jade as front-end template engine

By now, you've probably got my idea: to use Jade as a front-end template engine, we compose the template in Jade, compile it into a JavaScript file, and then invoke the JavaScript function on the front-end to achieve dynamic client-side rendering!

Since the Jade template is precompiled on the server side, there is very little runtime effort when rendering the template on the client side. So it is a cheap solution when you have lots of templates.

To compile the jade files into JavaScript instead of HTML, you need to pass -c or --client option to jade cli tool. Or calling jade.compile instead of jade.render while using JavaScript API.

Configure Grunt

Well, since Grunt is popular in the Node.js world, we can also use Grunt to do this for us.
Basically, using Grunt with Jade is straightforward, but it gets a little tricky when you want to compile the back-end templates into HTML while compiling the front-end templates into JavaScript.

I used a little trick to solve this. Following the Rails convention, I prefix the front-end template files with an underscore. So:

/layouts/default.jade       -> Layout file, extended by back-end/front-end templates, should not be compiled.
/views/settings/index.jade  -> Back-end template, should be compiled into HTML
/views/settings/_item.jade  -> Front-end template, should be compiled into JavaScript
Gruntfile.coffee
module.exports = (grunt) ->
  grunt.initConfig
    pkg: grunt.file.readJSON('package.json')

    jade:
      options:
        pretty: true

      compile:
        expand: true
        cwd: 'views'
        src: ['**/*.jade', '!**/_*.jade']
        dest: 'build/'
        ext: '.html'

      template:
        options:
          client: true
          namespace: 'Templates'

        expand: true
        cwd: 'views'
        src: ['**/_*.jade']
        dest: 'build/'
        ext: '.js'

  grunt.loadNpmTasks('grunt-contrib-jade')

I distinguish layouts from templates by file path, and front-end from back-end templates by the underscore prefix. The filter !**/_*.jade excludes the front-end templates when compiling the back-end templates.

This approach should work fine in most cases. If you face a more complicated situation that can't be handled with this trick, try defining your own convention and recognizing it with a custom filter function to categorize the files.

Upgrading DSL From CoffeeScript to JSON: Part.1. Migrator


I've been working on Harvester-AdKiller version 2 recently. Version 2 dropped the idea of "Code as Configuration" because of the nature of Chrome extensions: recompiling and reloading the extension every time the configuration changes is a pain in the ass for me as a user.

For security reasons, Chrome extensions disable all JavaScript runtime evaluation features, such as eval or new Function('code'). So it becomes almost impossible to edit code as data and apply it later on the fly.

Thanks to version 1, the features and DSL have almost fully settled, with few updates expected in the near future. So I can use a less flexible language for the DSL instead of CoffeeScript.

Finally, I decided to replace CoffeeScript with JSON, which can be easily edited and applied on the fly.

After introducing the JSON DSL, a migration system became important and urgent to enable DSL upgrades in the future. (This prediction turned out to be solid: I have already changed the JSON schema once today.) So I came up with a new migration system:

Upgrader
class Upgrader
  constructor: ->
    @execute()

  execute: =>
    console.log "[Upgrader] Current Version: #{Configuration.version}"
    migrationName = "#{Configuration.version}"
    migration = this[migrationName]

    unless migration?
      console.log '[Upgrader] Latest version, no migration needed.'
      return

    console.log "[Upgrader] Migration needed..."
    migration.call(this, @execute)

  'undefined': (done) ->
    console.log "[Upgrader] Load data from seed..."
    Configuration.resetDataFromSeed(done)

  '1.0': (done) ->
    console.log "[Upgrader] Migrating configuration schema from 1.0 to 1.1..."
    # Do the migration logic here
    done()

The Upgrader is instantiated when the extension starts, after Configuration is initialized, which holds the DSL data for runtime usage.

When the execute method is invoked, it checks the current version and whether there is an upgrading method matching that version. If there is, it triggers the migration; otherwise, the migration chain is complete. Each time a migration completes, it re-triggers the execute method for another round of checking.

Adding a migration for a specific version of the schema is quite simple: just declare a method named after the version number on the Upgrader, as the '1.0' method does.

'undefined' is a special migration method, which is invoked when no previous configuration is found. There I initialize the configuration from the seed data JSON file, which is generated from the version 1 DSL.
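The same version-dispatch loop can be sketched in plain JavaScript (runMigrations and these migration bodies are hypothetical, not the extension's actual code):

```javascript
// Look up a migration keyed by the current version string, run it,
// then re-check, until no migration matches (i.e. we're on the latest schema).
function runMigrations(config, migrations) {
  var applied = [];
  while (migrations[String(config.version)]) {
    var from = String(config.version);
    migrations[from](config); // each migration bumps config.version
    applied.push(from);
  }
  return applied;
}

var config = { version: undefined, data: {} };

var migrations = {
  // No previous configuration found: load the seed data.
  'undefined': function (c) { c.version = '1.0'; },
  // Migrate the 1.0 schema to 1.1.
  '1.0': function (c) { c.version = '1.1'; }
};

console.log(runMigrations(config, migrations)); // [ 'undefined', '1.0' ]
console.log(config.version); // 1.1
```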

Seed data generation is also an interesting topic. Please refer to the next post (Redefine DSL behavior) in this series for details.