Thursday, November 12, 2015

Continuous Integration tests for your Emacs package

I recently contributed some Emacs Lisp projects to MELPA. The process was pretty simple. I forked MELPA and created a file that pointed to my repo. Then after pushing to my fork, I submitted a pull request.

But I had a few issues where my code emitted compiler warnings. Every time I pushed code with byte-compile warnings to my Emacs package repository, someone would tell me to fix them. So I got fed up and created some tests that verify no compiler warnings exist.

I linked this together with Travis CI so that whenever I push my changes, a full test suite runs for my Emacs Lisp and verifies that no warnings exist. To set up Travis CI, I created this .travis.yml dotfile:
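It looked something like this sketch (the PPA, Emacs version, and Cask install step here are illustrative stand-ins for whatever your project needs):

language: generic
before_install:
  # Install Emacs from a PPA (pick whichever version you test against)
  - sudo add-apt-repository -y ppa:cassou/emacs
  - sudo apt-get update -qq
  - sudo apt-get install -qq emacs24
  # Install Cask to manage the package's dependencies
  - curl -fsSL https://raw.githubusercontent.com/cask/cask/master/go | python
  - export PATH="$HOME/.cask/bin:$PATH"
script:
  - make test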


.travis.yml basically tells Travis CI to install a particular version of Emacs, then run "make test". That's it. Here is what the Makefile looks like:
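A minimal sketch of that Makefile, assuming Cask manages the dependencies (recipe lines are tab-indented):

CASK ?= cask
EMACS ?= emacs
TESTS = $(wildcard test/*.el)

# The test files are phony targets, so each one can be run by name.
.PHONY: test $(TESTS)

# Run every test file under test/.
test: $(TESTS)

$(TESTS): .cask
	$(CASK) exec $(EMACS) --batch -Q -l $@ -f ert-run-tests-batch-and-exit

# Install the project's dependencies into the .cask sandbox.
.cask:
	$(CASK) install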


The Makefile test target essentially runs all the tests under the test/ directory while using the .cask directory as a sandbox, which has all the necessary Emacs Lisp packages installed for your project. Now all you have to do is put tests under a test/ directory in your git project. Here is the test I use to verify my package builds with no warnings:
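Here is a sketch of such a test (my-package.el is a placeholder for your own package file):

;;; test/warnings-test.el
(require 'ert)

(ert-deftest my-package-byte-compiles-cleanly ()
  "Byte-compiling the package should produce no warnings."
  ;; Treat every warning as an error so the compile fails loudly.
  (let ((byte-compile-error-on-warn t))
    (should (byte-compile-file "my-package.el"))))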


Now all you have to do is run make test in your Emacs package repository, and you can be sure there are no warnings that other people might run into when running package-install. Since the Makefile test target is dynamic, any test-file.el you put under the test/ directory will now be run in Travis CI.

If you want to run a single test manually, you can just run "make test/test-file.el". If you want to be really fancy, you can set up make test to run as part of your git pre-commit hooks.
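For example, a minimal pre-commit hook (saved as .git/hooks/pre-commit and marked executable) might be:

#!/bin/sh
# Abort the commit if the test suite fails.
make test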

Hope this helps!

Tuesday, October 27, 2015

Tying together custom knockout bindings

I ran across a problem recently where I had a bindingHandler that needed to depend on other bindings on the same node. Here is an example:
<div data-bind="myHandler, visible: !enabled()">
</div>
So in general, you have access to all the other bindings when inside a bindingHandler:
ko.bindingHandlers.myHandler = {
  init: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
    var visible = allBindings().visible;
  }
};
The problem with calling allBindings().someBinding is that you get the *result* of the binding. In my case, I wanted to subscribe to any update to the entire binding expression: "visible: !enabled()".

Knockout handles this for a normal bindingHandler in the update function. If the valueAccessor changes, update gets fired.
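For reference, a plain handler that only needs the current value would look something like this sketch:

ko.bindingHandlers.myHandler = {
  update: function(element, valueAccessor) {
    // Knockout re-runs update whenever an observable read through
    // valueAccessor changes.
    var value = ko.unwrap(valueAccessor());
  }
};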

So I needed a way to get access to the valueAccessor of a different bindingHandler on the same node. The solution I came up with involved asking the knockout bindingProvider for the value accessors, then subscribing to the one I wanted to listen to:
ko.bindingHandlers.myHandler = {
  init: function(element, valueAccessor, allBindings, viewModel, bindingContext) {
    // Ask the binding provider for the raw value accessors on this node.
    var bindingAccessors = ko.bindingProvider.instance.getBindingAccessors(element, bindingContext),
        visibleAccessor = bindingAccessors.visible,
        myObservable = viewModel.someObservable; // For complex reasons, this observable needs to update based on another binding on the same node
    if (visibleAccessor) {
      // This simulates what the knockout update bindingHandler does:
      ko.computed(function() {
        // Calling the accessor re-evaluates "!enabled()", so the computed
        // re-fires whenever that expression's observables change.
        myObservable(ko.unwrap(visibleAccessor()));
      }, null, { disposeWhenNodeIsRemoved: element }); // tidy up when the node is removed
    }
  }
};

So now, when the visible binding updates, we get each new result of the valueAccessor instead of a single result at one point in time (as in the first js code block).

I know this sounds complicated, but this was exactly what I was looking for: a way to access the valueAccessors of other bindings on the same node. Using allBindings().someHandler was forcing me to make trivial pureComputed variables:
var MyViewModel = function() {
  this.enabled = ko.observable(false);
  this.disabled = ko.pureComputed(function() {
    return !this.enabled();
  }, this);
}

I really wanted this trivial code to exist in the template, not in the viewModel. So the solution using getBindingAccessors allows me to keep this code in the templates.

Wednesday, September 2, 2015

Add on-the-fly bindings in knockout

Let's say you want to add a special CSS class that you only need in a certain context in a certain template. Instead of defining the class on the viewModel (this.extraCssClass = 'myClass';), consider doing this:

<!-- ko with: function() { $context.extraCssClass = {'my-css-class': true}; return $data } -->
{{ my/sub/template.mustache }}
<!-- /ko -->

What is happening here is that we use the with binding to call a function that installs extraCssClass on $context. Then we return $data, so the inner data-bind calls continue using the same $data scope.

Then in template.mustache, you can consume the css class by checking $context:
<div data-bind="css: $context.extraCssClass || {}"></div>

I don't think this is *too* hacky, and it allows you to inject custom variables without adding special logic to your viewModel.
Let me know what you think!

Tuesday, October 7, 2014

Trying to use math to solve my problems

I recently encountered a moderately hard problem that required geometry to solve. The basic idea is that I hit an API request that returns a list of points. When you connect the dots, these points trace out several irregular polygons:


But the idea behind these points is that people want to see them as a "heatmap", where the data represents something round, not square. Applying a technique such as Bezier-curve interpolation without any pruning, though, looks something like this:


Definitely curvy, but not exactly what we are looking for. A former colleague of mine encountered this problem before I did, and they tried to apply a strategy of discarding unneeded points. That is easier said than done. From what I could tell, the strategy was to compute a ratio of distances between 3 consecutive points; if the ratio exceeded a discard threshold, the middle point was discarded:


This turned out to be too aggressive and would prune points we wanted to keep:


If you look at the original picture, some of the polygons were just squares. If the polygon we were interpolating through was a regular convex polygon, like a square, we shouldn't remove any points: Bezier interpolation through a square's points should produce a more oval shape. But the previous algorithm was converting the squares into an irregular shape for no reason.

I had to think of a new strategy. There needed to be a way to get rid of the inner jagged corners. I kept thinking there was some textbook algorithm out there that had already solved this problem, but I couldn't find one. My problem seemed simple enough: all the points form the perimeter of a shape.

Since we are dealing with a perimeter, I don't think this is a convex hull problem. A convex hull algorithm tries to remove all inner points, but all we are trying to do is remove bad jagged points.

I tweaked the pruning algorithm to just walk over every 3 consecutive points (previous, current, next). With three points you can form two vectors: one going from previous to current, and one going from current to next. From these you can compute the dot product, which tells you whether the two segments form a right angle.
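A sketch of that check, assuming simple {x, y} point objects (the exact comparison with zero works when the points sit on a grid; otherwise compare against a small epsilon):

// Detect a right angle at `current` via the dot product of the two edge vectors.
function isRightAngle(prev, current, next) {
  var v1 = { x: current.x - prev.x, y: current.y - prev.y };
  var v2 = { x: next.x - current.x, y: next.y - current.y };
  // Perpendicular vectors have a dot product of zero.
  return v1.x * v2.x + v1.y * v2.y === 0;
}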

So now I was able to determine which points formed a right angle, but I still needed to figure out which right angles were bad. I tried to enumerate all the cases of bad angles:


This was on the right path, but not exactly correct. It turned out that sometimes the inverse behavior was observed: the algorithm appeared to be pruning the outside angles instead of the inside angles.

Eventually I determined that the direction of the points matters. If you are generally going in a clockwise direction, you should prune locally counterclockwise angles. But if you are generally going in a counterclockwise direction, you need to prune locally clockwise angles.
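The local turn direction at a point falls out of the sign of the 2D cross product; a sketch:

// The z-component of the cross product of the two edge vectors tells you
// which way the path bends at `current`.
function turnDirection(prev, current, next) {
  var v1 = { x: current.x - prev.x, y: current.y - prev.y };
  var v2 = { x: next.x - current.x, y: next.y - current.y };
  // Positive is one winding, negative the other (flipped in screen
  // coordinates, where the y axis points down).
  return v1.x * v2.y - v1.y * v2.x;
}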

If you took the same picture above, traversed it backwards, and removed locally counterclockwise angles, you would remove the wrong middle points:


So it seemed that direction mattered. I had to find an algorithm to figure out which direction the points were going, and I ended up finding something similar to the Shoelace Algorithm in a Stack Overflow answer. When you calculate the sum, a positive number means you are going clockwise and a negative one means counterclockwise (this is flipped in a browser, since the y axis is inverted).
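A sketch of that orientation test:

// Shoelace-style sum over the closed polygon; the sign gives the winding.
function windingSum(points) {
  var sum = 0;
  for (var i = 0; i < points.length; i++) {
    var p = points[i];
    var q = points[(i + 1) % points.length]; // wrap around to close the loop
    sum += (q.x - p.x) * (q.y + p.y);
  }
  // Positive: clockwise, negative: counterclockwise (flipped in browser
  // coordinates, where the y axis is inverted).
  return sum;
}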

OK, so after using the shoelace algorithm to determine direction, and the clockwise/counterclockwise rule above to decide which angles to prune, the results still didn't give reasonable shapes:



Some shapes still looked weird. After much debugging, the next best option was to also prune points of duplicate slope: too many consecutive points were heading in the same direction. Removing the extra points and interpolating between the outer points looks something like this:

It still doesn't feel perfect, but these were the results:

The question I have now is: is there an algorithm I'm missing here? Has this problem been solved before, and I just don't have the right vocabulary to google hard enough?

From what I can see, for each polygon I draw, I iterate over N points to calculate the clockwise/counterclockwise direction. Then I iterate over the N points again to prune inner angles. Then I iterate over a subset of the N points (n) to remove duplicate slopes. This totals N + N + n, which is O(N).
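The duplicate-slope step is just a collinearity test on three consecutive points; a sketch:

// Drop `current` when prev -> current -> next have the same slope.
// Cross-multiplying avoids dividing by zero on vertical segments.
function isCollinear(prev, current, next) {
  return (current.y - prev.y) * (next.x - current.x) ===
         (next.y - current.y) * (current.x - prev.x);
}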

So from a performance standpoint, I feel like the algorithm is solid.  From a math standpoint, I feel like I'm doing it wrong.  Maybe some sort of calculus or trigonometry needs to be applied?  Or is it good enough?  What do you think?


Monday, February 17, 2014

Keyword/Named arguments in programming languages that don't support them

Let's say you come from a scripting background like Python or Perl. Those languages have a cool feature called keyword arguments, which essentially allows you to pass a hash/dictionary of key/values to a function without declaring an object. This language feature saves you the effort of creating an on-the-fly data structure just to pass arguments to a function. In JavaScript, accomplishing keyword args essentially means building that object yourself at the call site. But what if you didn't want to always create an on-the-fly object? What other tools do we have available? Well, you could still use variable arguments, and knowing this, we could write a shim for JavaScript that parses named args out of arguments (sketches of both follow below). You may be thinking, "Big deal, I don't need to use arguments, I can just pass on-the-fly objects in JavaScript thank you very much."
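Here are sketches of both JavaScript approaches (greet and parseNamedArgs are just illustrative names):

// The usual on-the-fly object literal:
function greet(options) {
  console.log(options.greeting + ', ' + options.name);
}
greet({ greeting: 'hello', name: 'Bob' });

// A shim that reads flat key/value pairs out of `arguments` instead:
function parseNamedArgs(args) {
  var result = {};
  for (var i = 0; i + 1 < args.length; i += 2) {
    result[args[i]] = args[i + 1];
  }
  return result;
}

function greet2() {
  var opts = parseNamedArgs(arguments);
  console.log(opts.greeting + ', ' + opts.name);
}
greet2('greeting', 'hello', 'name', 'Bob'); // no object literal at the call site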
But this example was really a fake-out. You can use the same trick in a non-scripting language like Java, since it also allows variable-length arguments. Why would you want something like this in Java? Well, let's say you want a function that creates a test object for a JUnit test. The object is essentially a POJO, but you don't want to write a long chain of setter calls; wouldn't it be better to build the object from name/value pairs in one call? Then all you need to do is create the getTestObject method (a sketch follows below). One caveat is that you can either look up the public setters by name, or you have to expose the private variables. I am for exposing private variables since we are in test land; you probably shouldn't change field access control in production code. So with a little bit of utility code, you can create static constructor functions that use variable-length Object arguments to simulate named arguments. I'm sure there are Java purists who will say this violates object-oriented programming principles, but I argue that this is just for test code and is a means to an end: creating on-the-fly objects with less typing/effort.
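Here is a sketch of that utility using reflection on fields (the class and field names are hypothetical):

import java.lang.reflect.Field;

public final class TestObjects {
    // Build a POJO from flat name/value pairs, e.g.
    //   Person p = TestObjects.getTestObject(Person.class, "name", "Bob", "age", 42);
    public static <T> T getTestObject(Class<T> type, Object... namedArgs)
            throws Exception {
        T instance = type.getDeclaredConstructor().newInstance();
        for (int i = 0; i + 1 < namedArgs.length; i += 2) {
            Field field = type.getDeclaredField((String) namedArgs[i]);
            field.setAccessible(true); // fine in test land, not in production
            field.set(instance, namedArgs[i + 1]);
        }
        return instance;
    }
}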

Sunday, January 12, 2014

SVG Element Transparencies

I solved a really interesting problem the other day that eluded me previously.  I hope this will help someone in the future so they don't make my mistake.

Let's say you have an SVG tag with multiple transparent shapes in it. The way you set a shape's transparency is by adding a fill-opacity style to the element:

<svg height="50" width="100">
<line x1="0" y1="20" x2="100" y2="20" stroke="black"></line>
<circle cx="50" cy="25" r="10" fill="red" fill-opacity="0.5"></circle>
</svg>

The problem with adding fill-opacity to multiple overlapping elements is that their fills blend together, and sometimes this defeats the purpose of your image: if you draw one element over another, it is hard to tell which color sits on top of which.


<svg height="50" width="100">
<line x1="0" y1="20" x2="100" y2="20" stroke="black"></line>
<circle cx="50" cy="25" r="10" fill="red" fill-opacity="0.5"></circle>
<circle cx="55" cy="25" r="10" fill="yellow" fill-opacity="0.5"></circle>
<circle cx="60" cy="25" r="10" fill="blue" fill-opacity="0.5"></circle>
<circle cx="65" cy="25" r="10" fill="green" fill-opacity="0.5"></circle>
</svg>
In the end, what I really wanted was to group all these elements and then apply the opacity to the parent element. This made each individual color stand out, yet the overall shape was still transparent, so you could see the background behind it.

<svg height="50" width="100">
<line x1="0" y1="20" x2="100" y2="20" stroke="black"></line>
<g style="opacity: 0.5">
  <circle cx="50" cy="25" r="10" fill="red"></circle>
  <circle cx="55" cy="25" r="10" fill="yellow"></circle>
  <circle cx="60" cy="25" r="10" fill="blue"></circle>
  <circle cx="65" cy="25" r="10" fill="green"></circle>
</g>
</svg>
Why would you want one strategy versus the other? If you want the colors to appear distinct but transparent, group the elements together and add the transparency to their parent group. But if you want the colors to blend, add the opacities to each element individually.
By the way, this problem isn't really an SVG problem. You see the same results in plain old html. :-D

Wednesday, May 15, 2013

The day true became false


I occasionally get to work on a module that heavily uses Google Closure. I think the original architect chose it because Google provides a compiler and a pretty robust library for doing cross-browser JavaScript. I also like using Closure when multiple people are working on a large JavaScript project, because the compiler can enforce rules that keep certain bad code from being checked in (besides also enforcing a linter).

The particular issue I dealt with was "fun" because I wasted an entire day tracking down a defect that on the surface appeared to be a cross-browser issue. One action would show a page in Firefox, but a blank page in Chrome.

Initially, the strategy I took was to look at the ajax requests in developer tools and make sure that they were firing correctly at the right times. After determining that the requests were being fired, I tried to figure out why their callbacks were not.

One thing to note: we were able to reproduce the issue with the minified code, but not with the un-minified code.

So we eventually got into a window.console.log spiral, where we would put a log statement in a Closure library module, compile the source, test it, eventually say WTF, and rinse, repeat...

At one point, my pair programming partner noticed that one of the variables in the xhrio.js library was not behaving correctly.

The source was this:

goog.net.XhrIo.prototype.send = function(url, opt_method, opt_content,
                                         opt_headers) {
...
    this.active_ = true;
    window.console.log('this.active_:', this.active_);
    // should log true
This code was logging true; then, all of a sudden, it logged 0. We were baffled. Why was true turning into false?! Even other functions were failing:
...
   window.console.log('this.isComplete', this.isComplete());
   // showed 0
   if (this.isComplete()) {
   }
I eventually decided to go straight to the minified source and find out what isComplete was doing:
var b=Zh(a),c;a:switch(b){case 200:case 201:case 202:case 204:case 206:case 304:case 1223:c=j;break a;default:c=l
I figured that somewhere in this file was a variable l or a variable j that represented true or false. Sure enough, at the top of the file, I saw this:
function e(a){throw a;}var h=void 0,j=!0,k=null,l=!1;

!0 === true, and !1 === false.

I then figured that other code minifiers were using this strategy. I looked at minified jQuery, and sure enough, it was littered with !0 and !1 in truth checks and returns. I then figured that maybe Google had already addressed this issue, so I filed an issue with them.

They told me that you have to wrap your module in a private scope: https://code.google.com/p/closure-compiler/wiki/FAQ#When_using_Advanced_Optimizations,_Closure_Compiler_adds_new_var
What you do is pass a command line argument to the compiler.jar: --output_wrapper "(function(){%output%})();"

But I did some digging and had a Daily WTF moment... We already had this output wrapper, but it had been removed a few months earlier, when we created a separate Closure module. I don't know the reasoning behind the removal, but I do think there should be an option to choose whether to minify special keywords, or even just to skip the simple find/replace of true/false with !0/!1.

Anyway, now you know that if you don't wrap your output, you could be in for a lot of pain. All it takes is a third-party library that doesn't use the var keyword on simple variables. If you don't use the var keyword, the assignment changes the window-scoped variable, which was exactly where the Closure-minified variables were living.

If this for loop only ever got to 0 and never iterated past 0:
for (i = 0; i < something.length; i++) {   // no var, so i is really window.i
   for (j = 0; j < somethingelse.length; j++) {   // no var: this assigns 0 to window.j, the minified alias for true (!0)
     //...
   }
}
You would have a bad day trying to find out why true is now false...