Create SVGs without translation for fabric.js with Inkscape

When using Inkscape to create SVGs, your paths are often grouped together and translated. This translation can cause problems when using the SVG with fabric.js. Let’s assume a simple cross, consisting of two crossed lines. Inkscape produces something like this out of the box.
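
Stripped down to the essentials (the real Inkscape output contains a lot more metadata), the SVG looks something like this:

<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <g transform="translate(-240.5,-462.36)">
    <path d="m 240.5,512.36 100,0" style="stroke:#000000" />
    <path d="m 290.5,462.36 0,100" style="stroke:#000000" />
  </g>
</svg>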

I found a question on Stack Overflow which solves this: http://stackoverflow.com/questions/13329125/removing-transforms-in-svg-files

The trick is to remove the g tags around your paths, reopen the SVG in Inkscape, rearrange the shapes and save again. Now the transform statement is gone and you can “re”-edit your SVG using Inkscape.
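
The simplified result might then look like this, with the path coordinates now absolute instead of group-relative:

<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <path d="m 0,50 100,0" style="stroke:#000000" />
  <path d="m 50,0 0,100" style="stroke:#000000" />
</svg>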

Doing something like modules with the Rails asset pipeline and CoffeeScript

Some days ago, I played around with requirejs. Although I liked grouping my client-side code into AMD modules, using the (only) appropriate gem, ‘requirejs-rails’, didn’t feel that good. The main reasons for that:

  • last commit some months ago
  • no Rails 4 support so far
  • asset precompilation failed with my concrete project

To be fair, those assets:precompile issues with requirejs-rails were caused by some third-party gems I used. But anyway, they compiled under stock Rails and failed when using requirejs-rails. So it seems that the sprockets integration of requirejs-rails isn’t that flawless.

So I thought about how to get some module-like feeling with just the tools Rails delivers out of the box. Let’s give it a try.

Assumptions

Let’s assume the following class hierarchy

  Component
  SeatPlan < Component
  SeatPlan.InputDataSanitizer
  (SeatPlan.Ui)

SeatPlan should be a subclass of Component. I also wanted InputDataSanitizer to be a class of its own, but located “below” SeatPlan, because it only sanitizes SeatPlan input data. Think of it as some kind of namespacing. The same goes for Ui. The only difference between the two is that SeatPlan should store a reference to the InputDataSanitizer class, whereas for Ui it should store a concrete instance.

AMD/requirejs

With AMD, I would write something like this
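
A sketch of what that could look like (the module paths and file layout are my assumptions):

# component.coffee
define ->
  class Component

# seat_plan/input_data_sanitizer.coffee
define ->
  class InputDataSanitizer
    sanitize: (data) ->
      # sanitize SeatPlan input data here
      data

# seat_plan/ui.coffee
define ->
  class Ui
    constructor: (@seatPlan) ->

# seat_plan.coffee
define ['component', 'seat_plan/input_data_sanitizer', 'seat_plan/ui'],
  (Component, InputDataSanitizer, Ui) ->
    class SeatPlan extends Component
      # store a reference to the sanitizer class itself ...
      @InputDataSanitizer: InputDataSanitizer
      # ... but only a concrete instance of Ui
      constructor: ->
        @ui = new Ui(this)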

Imitate AMD with sprockets

Without something like AMD, you have to put these classes somewhere in the global scope in order to access them from within your client-side code. Let’s put them below window.app

  window
  + app
    + Component
    + SeatPlan
      + InputDataSanitizer
      + Ui

Combining Rails asset pipeline directives with some CoffeeScript, we can imitate something like a module system. Let’s look at the code.
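
What follows is a sketch, assuming all files live in app/assets/javascripts and are pulled in through the usual application.js manifest; the concrete file names are mine:

# app.js.coffee
window.app = {}

# component.js.coffee
#= require app
app.Component = class Component
  # behaviour shared by all components goes here

# seat_plan.js.coffee
#= require component
app.SeatPlan = do (Component = app.Component) ->
  class SeatPlan extends Component
    constructor: ->
      # Ui is attached to SeatPlan below; since instances are only
      # created after all files have loaded, the lookup succeeds
      @ui = new SeatPlan.Ui(this)

# seat_plan/input_data_sanitizer.js.coffee
#= require seat_plan
app.SeatPlan.InputDataSanitizer = do ->
  class InputDataSanitizer
    sanitize: (data) ->
      # sanitize SeatPlan input data here
      data

# seat_plan/ui.js.coffee
#= require seat_plan
app.SeatPlan.Ui = do ->
  class Ui
    constructor: (@seatPlan) ->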

Thanks to CoffeeScript’s ‘do’, which allows us to explicitly create a closure, it feels almost like doing AMD. Just without the need for any additional gem/module loader.

Ungroup objects grouped programmatically with fabric.js

I wanted to group some objects, manipulate that group (center, rotate) and then ungroup the objects in order to handle events on each object rather than on the whole group. I spent some time getting this to work, so I think it’s worth sharing.

tl;dr

// grab the items from the group you want to "ungroup"
var items = group._objects;
// translate the group-relative coordinates to canvas-relative ones
group._restoreObjectsState();
// remove the original group and add all items back to the canvas
canvas.remove(group);
for (var i = 0; i < items.length; i++) {
  canvas.add(items[i]);
}
// only needed if you have disabled rendering on addition
canvas.renderAll();

I’ve created a short demo gist, which can be executed within fabric’s kitchensink demo.

How to use http compression with Savon

Doing a lot of SOAP requests using Savon in Rails, I wondered if it is possible to reduce the size of the SOAP responses using http compression (gzip or deflate). Short answer: it is. Long answer: you have to know how to enable it, and there are multiple ways to do the trick.

The following is based on Savon 1.0, so if you are using Savon >= 2, things may have changed.

First some background information. In version 0.7.9 Savon added support for gzip compression, so I first tried this.

Savon::Client.new "http://mydomain/myService?wsdl", :gzip => true

Unfortunately, that doesn’t work and Savon complains about a wrong number of arguments. Digging into Savon’s code showed that you can only pass a block as the second parameter. But what to put in there?

Savon internally uses HTTPI to abstract over several Ruby http clients. When you want to mess with http in Savon, you have to mess with HTTPI. Now back to the question of what to put into Savon’s client block to enable http compression. The answer: from inside the block, you can access ‘http’, which is in fact an instance of HTTPI::Request.

HTTPI::Request provides some methods to set and alter the request’s http headers. That means setting http header options for a Savon client would look like this

Savon::Client.new "http://mydomain/myService?wsdl" do
  http.headers = { 'Accept-Encoding' => 'gzip, deflate' }
end

Of course, you can set other headers this way, too. It’s just a hash.

HTTPI::Request offers a shortcut method for setting the http header that requests compression. It’s called ‘gzip’. So the code from above could also be written like this.

Savon::Client.new "http://mydomain/myService?wsdl" do
  http.gzip
end

Ok, we are done. Quite simple if you know where to put it :)

Last but not least, you could also enable compression ‘per request’ using Savon’s soap request hook. Savon offers exactly one hook, called :soap_request. To let the documentation speak, it “acts like an around filter wrapping the POST request executed to call a SOAP service”.

The benefit of intercepting the request using this hook is that you can enable http compression ‘globally’ for all instances of Savon::Client.

Savon.configure do |config|
  config.hooks.define(:enable_compression, :soap_request) do |callback, request|
    # we have to use request.http instead of http
    request.http.gzip
  # and trigger the actual request on our own
    response = callback.call
  end
end

Using http compression, the size of my SOAP responses was noticeably reduced. In fact, some compressed responses are only 1/10 of their original size. So this is a very cheap option to save bandwidth and maybe speed up request processing.

From zero to Solr: A hands-on tutorial

The intention of this tutorial is to give you a quick guide on how to set up Apache Solr and query xml-based data, without digging too much into details. As always, there are several ways to accomplish these goals. The steps mentioned below are therefore just one example and don’t claim to be “best practice”.

  1. Install Solr 4 on Ubuntu 12.04
  2. Prepare your data for being indexed
  3. Setup the schema
  4. Index some data
  5. Customize the default query handler

Install Solr 4 on Ubuntu 12.04

This section is based on a blog post by Tomasz Muras.

The following instructions refer to a vanilla installation of the Ubuntu LTS server version (12.04) i386, with just the “OpenSSH server” package set selected during install. In order to install and run Solr, we need tomcat and curl.

  • sudo apt-get install tomcat6 curl

Next, download Solr from http://lucene.apache.org/solr. At the time of writing, the current version is 4.0.0. The following command will download Solr to your home directory from one of the many mirrors available. Adapt the url if needed.
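
For example, using the Apache archive:

  • wget -P ~ http://archive.apache.org/dist/lucene/solr/4.0.0/apache-solr-4.0.0.tgz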

Now we have the Solr tgz in our home directory. Let’s put it somewhere, e.g. to /opt/solr

  • sudo mkdir -p /opt
  • sudo tar -xvzf ~/apache-solr-4.0.0.tgz -C /opt

In order to keep the following steps independent from the actual Solr version, let’s create a symbolic link in /opt. Adapt this to your actual Solr version.

  • cd /opt
  • sudo ln -s apache-solr-4.0.0 solr

Solr comes with example configurations which can easily be used to get started. To use them, we copy the appropriate files to Solr’s home directory.

  • cd /opt/solr
  • sudo cp -r example/solr/* ./
  • sudo cp example/webapps/solr.war ./

The example shipped with Solr uses a single “core” named collection1. Without trying to explain what Solr cores are, think of them as a way to host multiple indices within a single Solr instance.

Let’s change collection1’s name to something more friendly, e.g., catalog.

  • cd /opt/solr
  • sudo mv collection1 catalog

But that’s not all. You have to modify the copied example config /opt/solr/solr.xml as well. Simply change every occurrence of “collection1” to “catalog” below the “cores” element. Afterwards, it should look like this
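
Stripped of comments, roughly like this (the exact attributes may differ slightly depending on your Solr version):

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="catalog">
    <core name="catalog" instanceDir="catalog" />
  </cores>
</solr>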

Solr needs a directory to store its data. Let’s create the directory and set appropriate rights for Solr to be able to access it.

  • sudo mkdir /opt/solr/data
  • sudo chown tomcat6 /opt/solr/data

Now tell Solr about the data directory by adding/editing the “dataDir” element in your core’s main config file, which in this case is /opt/solr/catalog/conf/solrconfig.xml. The element should look like this.
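
<dataDir>/opt/solr/data</dataDir>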

Last but not least, you have to tell tomcat about your new Solr instance. To do so, create a file named /etc/tomcat6/Catalina/localhost/solr.xml with the following content.
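
<?xml version="1.0" encoding="utf-8"?>
<Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/opt/solr" override="true" />
</Context>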

Restart tomcat and Solr should be waiting for you on port 8080.

  • sudo /etc/init.d/tomcat6 restart

Prepare your data for being indexed

The following steps assume that the data to be indexed is stored as xml files, each file representing a single item.

Now that you have Solr up and running, let’s index some data. But beware: Solr expects data to be in a special format. You cannot simply push your own xml files into it without preprocessing them first. So what does Solr expect? Have a look at the xml files in the exampledocs directory (in our case located at /opt/solr/example/exampledocs), for example monitor.xml

  • cat /opt/solr/example/exampledocs/monitor.xml

which will show the following xml file.
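
(Abbreviated here; the exact fields depend on your Solr version.)

<add>
  <doc>
    <field name="id">3007WFP</field>
    <field name="name">Dell Widescreen UltraSharp 3007WFP</field>
    <field name="manu">Dell, Inc.</field>
    <field name="cat">electronics</field>
    <field name="cat">monitor</field>
    <field name="price">2199</field>
    <field name="inStock">true</field>
    <!-- more fields omitted -->
  </doc>
</add>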

As you can see, there is an “add” element, which contains one or many “doc” elements, which in turn contain one or many “field” elements with an attribute called “name” and a value. A minimal xml file to be indexed by Solr would look like this.
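
<add>
  <doc>
    <field name="id">1</field>
  </doc>
</add>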

The problem is that your data is probably not in that format, so you have to convert it prior to loading it into Solr. When the input data is xml, one way to accomplish this is by using XSLT. Doing complex transformations using XSLT is a topic of its own, so let’s assume the following simple xml input.
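
For this tutorial, let’s work with a hypothetical ~/input.xml like the following, including a multi-valued subject:

<?xml version="1.0" encoding="UTF-8"?>
<item>
  <id>1</id>
  <type>foo_type</type>
  <format>foo_format</format>
  <status>foo_status</status>
  <subject>subject_one</subject>
  <subject>subject_two</subject>
</item>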

The following XSLT will transform this xml to an xml file which can be loaded into Solr.
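
For the sample input above, a minimal stylesheet could look like this; it simply turns every child of “item” into a Solr “field”:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes" encoding="UTF-8"/>
  <xsl:template match="/">
    <add>
      <doc>
        <!-- turn every child element of "item" into a field,
             using the element name as the field name -->
        <xsl:for-each select="item/*">
          <field name="{local-name()}">
            <xsl:value-of select="."/>
          </field>
        </xsl:for-each>
      </doc>
    </add>
  </xsl:template>
</xsl:stylesheet>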

By the way, if you want to hack some xsl, try http://www.xmlper.com, an online xsl/xml editor with a live preview of your transformed xml.

Ok, now put your xsl file somewhere where we can use it later on. There is already a directory for xslt files in our example core (which is named catalog), located at /opt/solr/catalog/conf/xslt, so save the xslt file there and give it an expressive name, like input_to_solr.xsl.

To check if your stylesheet works as expected you can use xsltproc. Install the package and do some transformation on your sample input xml located at ~/input.xml.

  • sudo apt-get install xsltproc
  • xsltproc /opt/solr/catalog/conf/xslt/input_to_solr.xsl ~/input.xml

This should give the following xml file, which matches what Solr expects.
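
(Shown here for the sample input from above.)

<?xml version="1.0" encoding="UTF-8"?>
<add>
  <doc>
    <field name="id">1</field>
    <field name="type">foo_type</field>
    <field name="format">foo_format</field>
    <field name="status">foo_status</field>
    <field name="subject">subject_one</field>
    <field name="subject">subject_two</field>
  </doc>
</add>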

Setup the schema

Before trying to index some data, we have to tell Solr which fields we are using. The corresponding schema configuration file is located at /opt/solr/catalog/conf/schema.xml. Remember that we have a core named “catalog”, and the schema file is simply located in the conf directory below that core’s root.

Solr already knows certain fields, e.g. id or subject, but some fields are missing from the default schema, like “type” or “format”. And even the fields which the default schema defines may not match our input data, so let’s alter the default schema to match our input.

In /opt/solr/catalog/conf/schema.xml, add the following lines inside the “fields” element.
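
For our sample input, something like this, reusing the “string” field type the example schema already provides:

<field name="type" type="string" indexed="true" stored="true"/>
<field name="format" type="string" indexed="true" stored="true"/>
<field name="status" type="string" indexed="true" stored="true"/>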

But we are not finished yet. The Solr default schema defines a field named “subject”, but does not declare this field to be “multi valued”, as it is in our input xml. So we need to alter the existing field definition and add the “multiValued” attribute set to “true”. After editing, the line should look like this
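
<field name="subject" type="text_general" indexed="true" stored="true" multiValued="true"/>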

Now that the schema file corresponds to our input, restart tomcat to ensure the new schema is loaded.

  • sudo /etc/init.d/tomcat6 restart

Index some data

Now that you are able to transform your input into something Solr understands, you could apply this transformation to all input files and POST them to Solr. Let’s do this for our input.xml.

  • xsltproc /opt/solr/catalog/conf/xslt/input_to_solr.xsl ~/input.xml|curl "http://localhost:8080/solr/update?commit=true" --data-binary @- -H 'Content-type:application/xml'

Let’s look at this command. We transform our ~/input.xml file using xsltproc and the stylesheet located at /opt/solr/catalog/conf/xslt/input_to_solr.xsl and pipe the result to curl.

Curl does a POST (with the “Content-Type” header set to “application/xml”) to http://localhost:8080/solr/update with “commit=true”, taking the data from the pipe as --data-binary (“@-” means: read the input from STDIN).

The result should be as follows (except QTime, which may be different for you).
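
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">356</int>
  </lst>
</response>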

Congratulations, you have indexed your first file. Let’s search for it using curl.
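
For example, query for the document by its id; wt=json asks Solr for a JSON response, and thanks to defaultCoreName we don’t have to put the core name into the url:

  • curl "http://localhost:8080/solr/select?q=id:1&wt=json&indent=true"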

This should give you a nice JSON representation of the input file, like this
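
(Shortened a bit, for the sample document from above.)

{
  "responseHeader": {
    "status": 0,
    "QTime": 1
  },
  "response": {
    "numFound": 1,
    "start": 0,
    "docs": [
      {
        "id": "1",
        "type": "foo_type",
        "format": "foo_format",
        "status": "foo_status",
        "subject": ["subject_one", "subject_two"]
      }
    ]
  }
}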

Customize the default query handler

At the moment, you can query items by specifying field:value pairs like “status:foo_status”. But what you probably want is to query for terms in multiple (or all) fields, without naming them. This can be accomplished by setting some smart defaults for a query handler in your core’s Solr config. In this tutorial, we have a single core named “catalog”, so the config file is /opt/solr/catalog/conf/solrconfig.xml.

Search for the definition of the “requestHandler” with name=“/query”. This element has a “lst” child named “defaults”. Here you can define query parameters which should be assumed if they are not given in the request.

Let’s combine the ability to define defaults with a different query mode like (e)dismax. Change the “requestHandler” element for /query to look like this.
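
For our sample fields, for example like this (defType switches the handler to edismax):

<requestHandler name="/query" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="wt">json</str>
    <str name="indent">true</str>
    <str name="defType">edismax</str>
    <str name="qf">id type format status subject</str>
  </lst>
</requestHandler>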

Now, if you issue a query, the dismax query mode is used. This mode provides the qf parameter, where you can specify the fields in which Solr should search for the query term. In this example, all fields should be searched. With this request handler, you can query items by simply doing
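
  • curl "http://localhost:8080/solr/query?q=foo_status"

This searches for “foo_status” in all of the fields listed in qf.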

A last note on the qf parameter. You can define “boost” values for each field, which will make some fields more relevant than others when doing a query. Boost values are written using ^, e.g. title^2.

Include files from git submodules when building a ruby gem

Today I ran into the following situation: I wanted to build a gem with bundler which has some vendored assets included as git submodules. My directory structure was something like this.

.git/
app/*
lib/*
vendor/assets/javascripts/es5-shim (submodule)
vendor/assets/javascripts/pathjs (submodule)

When doing

rake build

the files from the submodules were not included in the gem, because the gemspec specifies the files as follows.

gem.files = `git ls-files`.split($\)

Unfortunately, git ls-files does not list files from submodules, and that’s why these files are not included in the gem.

I solved this by utilizing git submodule’s foreach command in combination with some ruby string manipulation.

The resulting gemspec looks like this.
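
A sketch of the approach (the variable names are mine):

gem.files = `git ls-files`.split($\)

# git ls-files does not descend into submodules, so collect their
# files manually. 'git submodule foreach pwd' prints each
# submodule's absolute path.
gem_dir = File.expand_path(File.dirname(__FILE__)) + '/'
`git submodule --quiet foreach pwd`.split($\).each do |submodule_path|
  Dir.chdir(submodule_path) do
    submodule_relative_path = submodule_path.sub(gem_dir, '')
    # list the submodule's files and prefix them with the
    # submodule's path relative to the gem root
    `git ls-files`.split($\).each do |filename|
      gem.files << File.join(submodule_relative_path, filename)
    end
  end
end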

Matt Connolly suggested a shorter version of the gemspec. Have a look at his comment.

Using underscore.js to wrap node.js child_process.spawn

As I was writing jake files to build my projects, I was faced with the problem of spawning new child processes using node’s child_process.spawn. In order to get the output of these child processes on the console, you have to attach to the child process’s stdout ‘data’ event. So my code looked like this.

var child_process = require('child_process');

var child = child_process.spawn(cmd, params.split(' '), {
  cwd: '.'
});

child.stdout.on('data', function(data) {
  process.stdout.write('' + data);
});

child.stderr.on('data', function(data) {
  process.stderr.write('' + data);
});

That does the job. But you have to write this for every process you want to spawn. Every time the same .stdout.on(‘data’, function(data) { … } stuff. So I thought this would be a great chance to play with underscore.js’s wrap function, so that stdout and stderr would be written to the console by default. The resulting code looks like this.

var child_process = require('child_process');
var underscore = require('underscore');

child_process.spawn = underscore.wrap(child_process.spawn, function(func) {
  // We have to strip arguments[0] out, because that is the function
  // actually being wrapped. Unfortunately, 'arguments' is no real array,
  // so shift() won't work. That's why we use Array.prototype.splice,
  // which removes the first entry from 'arguments' in place. Thx to
  // Ryan McGrath for this optimization.
  Array.prototype.splice.call(arguments, 0, 1);

  // Call the wrapped function with the now cleaned arguments
  var childProcess = func.apply(this, arguments);

  childProcess.stdout.on('data', function(data) {
    process.stdout.write('' + data);
  });

  childProcess.stderr.on('data', function(data) {
    process.stderr.write('' + data);
  });

  return childProcess;
});

…

var child = child_process.spawn(cmd, params.split(' '), {
  cwd: '.'
});

Now, every time you use child_process.spawn, stdout and stderr are tied to process.stdout and process.stderr automatically.