Setup ruby (oci8, sequel) for Oracle on ubuntu 14.04

Download and install Oracle instant client

  • go to http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html
  • choose your environment
    • be aware that not all environments are up to date; for instance “Linux AMD64” is not, whereas “Linux X86-64” is
  • accept the license agreement
  • download the necessary rpm files (not the zip archives) (you may be prompted to login with your Oracle account)
    • Instant Client Package – Basic
      • oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
    • Instant Client Package – SQL*Plus
      • oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm
    • Instant Client Package – SDK
      • oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm
  • install alien to be able to install the rpm’s
    • sudo apt-get install alien
  • install the rpms using alien
    • sudo alien -i oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
    • sudo alien -i oracle-instantclient12.1-sqlplus-12.1.0.2.0-1.x86_64.rpm
    • sudo alien -i oracle-instantclient12.1-devel-12.1.0.2.0-1.x86_64.rpm

Make libraries available

  • sudo touch /etc/ld.so.conf.d/oracle.conf && sudo vi /etc/ld.so.conf.d/oracle.conf
  • add the following
    • /usr/lib/oracle/12.1/client64/lib
  • sudo ldconfig

Make headers available

As the original ubuntu guide indicates, some libraries expect the headers to reside below the main oracle home directory.

  • sudo ln -s /usr/include/oracle/12.1/client64 /usr/lib/oracle/12.1/client64/include

Setup ORACLE_HOME

  • sudo touch /etc/profile.d/oracle.sh && sudo vi /etc/profile.d/oracle.sh
  • add the following lines
    • export NLS_LANG="AMERICAN_AMERICA.UTF8"
    • export ORACLE_HOME=/usr/lib/oracle/12.1/client64
    • export PATH=$PATH:$ORACLE_HOME/bin

SSH tunnel into your oracle box

You probably don’t want to expose the necessary ports on your oracle box to the outside (not even to your development box). So here is a quick recap of how to tunnel the needed port using ssh.

  • ssh -L 1521:localhost:1521 your_oracle_box

Install necessary gems

  • gem install ruby-oci8
  • gem install sequel

You may also want to install other gems like activerecord-oracle_enhanced-adapter.
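If you manage your gems per project with Bundler, the corresponding Gemfile entries might look like this (just a minimal sketch; the adapter line is optional and only needed if you use ActiveRecord instead of Sequel):

source "https://rubygems.org"

gem "ruby-oci8"
gem "sequel"
# gem "activerecord-oracle_enhanced-adapter" # only needed for ActiveRecord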

Test the whole setup

Run the following in irb to check if everything works as expected. It assumes there is a schema named some containing a table named table.

require "oci8"
require "sequel"

DB = Sequel.oracle("your_sid", user: "your_user", password: "your_password", host: "localhost", port: 1521)

puts DB["select * from some.table"].count
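If that prints a row count, the whole stack works. A couple of follow-up checks you might run from the same irb session (again a sketch; "some" and "table" are the same placeholder names as above, and the double-underscore symbol is Sequel's pre-5.x notation for a schema-qualified table):

dataset = DB[:some__table]  # schema-qualified table, equivalent to "some.table"

puts dataset.count          # same count as the raw SQL above
puts dataset.first.inspect  # first row as a hash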

 


Create SVGs without translation for fabric.js with Inkscape

When using Inkscape to create SVGs, your paths are often grouped together and translated. This translation might cause problems when using the SVG with fabric. Let’s assume a simple cross, consisting of two crossed lines. Inkscape produces something like this out of the box.

https://gist.github.com/msievers/8360079

I found a question on Stackoverflow which solves this (http://stackoverflow.com/questions/13329125/removing-transforms-in-svg-files)

The trick is to remove the g tags around your paths, reopen the svg in Inkscape, rearrange the shapes and save again. Now the transform statement is gone and you can “re”-edit your SVG using Inkscape.


Doing like a module with Rails asset-pipeline and CoffeeScript

Some days ago, I played around with requirejs. Although I liked grouping my client-side code into AMD modules, using the (only) appropriate gem ‘requirejs-rails’ didn’t feel that good. Main reasons for that:

  • last commit some months ago
  • no Rails 4 support so far
  • asset precompilation failed with my concrete project

To be fair, the assets:precompile issues with requirejs-rails were caused by some third-party gems I used. But anyway, they compiled under stock Rails and failed when using requirejs-rails. So it seems the sprockets integration of requirejs-rails isn’t that flawless.

So I thought about how to get some module-like feeling with just the tools Rails delivers out of the box. Let’s have a look.

Assumptions

Let’s assume the following class hierarchy

  Component
  SeatPlan < Component
  SeatPlan.InputDataSanitizer
  (SeatPlan.Ui)

SeatPlan should be a subclass of Component. Also I wanted InputDataSanitizer to be a class on its own, but located “below” SeatPlan, because it only sanitizes SeatPlan input data. Think of some kind of namespacing. Same for Ui. The only difference between the two is that SeatPlan should store a reference to the InputDataSanitizer class, whereas for Ui it should store a concrete instance.

AMD/requirejs

With AMD, I would write something like this


# assets/javascript/Component.js.coffee
define ->
  class
    methodEverybodyShouldHave: ->
      #

# assets/javascript/SeatPlan/InputDataSanitizer.js.coffee
define ->
  class
    sanitize: (data) ->
      #

# assets/javascript/SeatPlan/Ui.js.coffee
define ->
  class
    constructor: (el) ->
      #

# assets/javascript/SeatPlan.js.coffee
define ['Component', 'SeatPlan/InputDataSanitizer', 'SeatPlan/Ui'], (Component, InputDataSanitizer, Ui) ->
  class extends Component
    constructor: (el) ->
      @InputDataSanitizer = InputDataSanitizer
      @ui = new Ui(el)

Imitate AMD with sprockets

Without something like AMD, you have to put these classes somewhere in global scope in order to access them from within your client-side code. Let’s put them below window.app

  window
  + app
    + Component
    + SeatPlan
      + InputDataSanitizer
      + Ui

Combining Rails asset-pipeline directives and some CoffeeScript we can imitate something like a module system. Let’s look at the code.
https://gist.github.com/msievers/6120667
Thanks to CoffeeScript’s ‘do’, which allows us to explicitly create a closure, it feels almost like doing AMD, just without the need for any additional gem/module loader.


Ungroup objects grouped programatically with fabric.js

I wanted to group some objects, manipulate that group (center, rotate) and ungroup the objects in order to handle events on each object rather than on the whole group. I spent some time getting this working, so I think it’s worth sharing.

tl;dr

// grab the items from the group you want to "ungroup"
var items = group._objects;
// translate the group-relative coordinates to canvas relative ones
group._restoreObjectsState();
// remove the original group and add all items back to the canvas
canvas.remove(group);
for(var i = 0; i < items.length; i++) {
  canvas.add(items[i]);
}
// if you have disabled render on addition
canvas.renderAll();

I’ve created a short demo gist, which can be executed within fabric’s kitchensink demo.


// clear canvas
canvas.clear();
// add two red rectangles
canvas.add(new fabric.Rect({
  width: 50, height: 50, left: 50, top: 50, fill: 'rgb(255,0,0)'
}));
canvas.add(new fabric.Rect({
  width: 50, height: 50, left: 110, top: 50, fill: 'rgb(255,0,0)'
}));
var group = new fabric.Group([
  canvas.item(0).clone(),
  canvas.item(1).clone()
]);
canvas.clear().renderAll();
canvas.add(group);
// move group, rotate group
group.centerH();
group.centerV();
group.rotate(70);
// ungrouping starts here
var items = group._objects;
group._restoreObjectsState();
canvas.remove(group);
for(var i = 0; i < items.length; i++) {
  canvas.add(items[i]);
}
canvas.renderAll();


How to use http compression with Savon

Doing a lot of SOAP requests with Savon in Rails, I wondered if it is possible to reduce the size of the SOAP responses using http compression (gzip or deflate). Short answer: it is. Long answer: you have to know how to enable it, and there are multiple ways to do the trick.

The following is based on Savon 1.0, so if you are using Savon >= 2, things may have changed.

First some background information. In version 0.7.9 Savon added support for gzip compression, so I first tried this.

Savon::Client.new "http://mydomain/myService?wsdl", :gzip => true

Unfortunately, that doesn’t work and Savon complains about a wrong number of arguments. Digging into Savon’s code showed that you can only pass a block as the second parameter. But what to put in there?

Savon internally uses HTTPI to abstract several Ruby http clients. When you want to mess with http in Savon, you have to mess with HTTPI. Now back to the question of what to put into Savon’s client block to enable http compression. The answer: from inside the block, you can access ‘http’, which is in fact an instance of HTTPI::Request.

HTTPI::Request provides some methods to set and alter the request’s http headers. That means setting http headers for a Savon client would look like this

Savon::Client.new "http://mydomain/myService?wsdl" do
  http.headers = { 'Accept-Encoding' => 'gzip, deflate' }
end

Of course, you can set other headers this way, too. It’s just a hash.

HTTPI::Request offers a shortcut method for setting the http header that indicates http compression. It’s called ‘gzip’. So the code from above could also be written like this.

Savon::Client.new "http://mydomain/myService?wsdl" do
  http.gzip
end

Ok, we are done. Quite simple if you know where to put it 🙂

Last but not least you could also enable compression ‘per request’ using Savon’s soap request hook. Savon offers exactly one hook called :soap_request. To let the documentation speak, it acts like an around filter wrapping the POST request executed to call a SOAP service.

The benefit of intercepting the request using this hook is that you can enable http compression ‘globally’ for all instances of Savon::Client.

Savon.configure do |config|
  config.hooks.define(:enable_compression, :soap_request) do |callback, request|
    # we have to use request.http instead of http
    request.http.gzip
    # and trigger to actual request on our own
    response = callback.call
  end
end
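With the hook configured, every client created afterwards sends the compression header automatically. A minimal call could look like this (a sketch against the Savon 1.x request API; service URL, operation name and body are placeholders):

client = Savon::Client.new "http://mydomain/myService?wsdl"

response = client.request :my_operation do
  soap.body = { :some_param => "some_value" }
end

puts response.to_hash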

Using http compression, the size of my SOAP responses was noticeably reduced. In fact, some compressed responses are only 1/10 of their original size. So this is a very cheap way to save bandwidth and maybe speed up request processing.


From zero to Solr: A hands-on tutorial

The intention of this tutorial is to give you a quick guide on how to set up Apache Solr and query xml-based data, without digging too much into details. As everywhere, there are several ways to accomplish this goal. The steps mentioned below are therefore just an example and don’t claim to be “best practice”.

  1. Install Solr 4 on ubuntu 12.04
  2. Prepare your data for being indexed
  3. Setup the schema
  4. Index some data
  5. Customize the default query handler

Install Solr 4 on ubuntu 12.04

This section is based on a blog post by Tomasz Muras.

The following instructions refer to a vanilla installation of the Ubuntu LTS server version (12.04) i386, just using the “OpenSSH server” package set during install. In order to install and run Solr, we need tomcat and curl.

  • sudo apt-get install tomcat6 curl

Next, download Solr from http://lucene.apache.org/solr. At the time of writing, the current version is 4.0.0. The following command will download Solr to your home directory from one of the many mirrors available. Adapt the url if needed.

  • cd ~ && curl -O http://mirror.netcologne.de/apache.org/lucene/solr/4.0.0/apache-solr-4.0.0.tgz

Now we have the Solr tgz in our home. Let’s put it somewhere, e.g. to /opt/solr

  • sudo mkdir -p /opt
  • sudo tar -xvzf ~/apache-solr-4.0.0.tgz -C /opt

In order to keep the following steps independent from the actual Solr version, let’s create a symbolic link in opt. Adapt this to your actual Solr version.

  • cd /opt
  • sudo ln -s apache-solr-4.0.0 solr

Solr comes with example configurations which can be easily used to get started. Therefore, we need to copy the appropriate files to Solr’s homedir.

  • cd /opt/solr
  • sudo cp -r example/solr/* ./
  • sudo cp example/webapps/solr.war ./

The example shipped with Solr uses a single “core” named collection1. Without trying to explain what Solr cores are, think of them as a way to host multiple indices within a single Solr instance.

Let’s change collection1’s name to something more friendly, e.g., catalog.

  • cd /opt/solr
  • sudo mv collection1 catalog

But that’s not all. You have to modify the copied example config /opt/solr/solr.xml as well. Simply change every occurrence of “collection1” to “catalog” below the “cores” element. Afterwards, it should look like this

https://gist.github.com/3972338

Solr needs a directory to store its data. Let’s create the directory and set appropriate rights for Solr to be able to access it.

  • sudo mkdir /opt/solr/data
  • sudo chown tomcat6 /opt/solr/data

Now tell Solr about the data directory by adding/editing the “dataDir” element in your core’s main config file, which is in this case /opt/solr/catalog/conf/solrconfig.xml. The element should look like this.

https://gist.github.com/3972377

Last but not least you have to tell tomcat about your new Solr instance. Therefore, create a file named /etc/tomcat6/Catalina/localhost/solr.xml with the following content.

https://gist.github.com/3972383

Restart tomcat and Solr should be waiting for you on port 8080.

  • sudo /etc/init.d/tomcat6 restart

Prepare your data for being indexed

The following steps assume that the data to be indexed is stored as xml files, each file representing a single item.

Now that you have Solr up and running, let’s index some data. But beware, Solr expects data to be in a special format. You cannot simply push your own xml files into it without preprocessing them first. So what does Solr expect? Have a look at the xml files in the exampledocs directory (which is in our case located at /opt/solr/example/exampledocs), for example monitor.xml

  • cat /opt/solr/example/exampledocs/monitor.xml

which will show the following xml file.

https://gist.github.com/3972412

As you can see, you have an “add” element, which contains one or more “doc” elements, each of which contains one or more “field” elements with an attribute called “name” and a value. A minimal xml file to be indexed by Solr would look like this.

https://gist.github.com/3972606
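If you would rather generate such a document from Ruby than write it by hand, a rough sketch using REXML could look like this (the field names are placeholders, not taken from this tutorial's schema):

require "rexml/document"

fields = { "id" => "item-1", "title" => "Some title" }

doc = REXML::Document.new
doc_element = doc.add_element("add").add_element("doc")

fields.each do |name, value|
  field = doc_element.add_element("field", "name" => name)
  field.text = value
end

puts doc.to_s  # <add><doc><field name='id'>item-1</field>...</doc></add>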

The problem is that your data is probably not in that format, so you have to convert it prior to loading it into Solr. When the input data is xml, one way to accomplish this is by using XSLT. Doing complex transformations using XSLT is a topic of its own, so let’s assume the following simple xml input.

https://gist.github.com/3972615

The following XSLT will transform this xml to an xml file which can be loaded into Solr.

https://gist.github.com/3972619

By the way, if you want to hack some xsl, try http://www.xmlper.com, an online xsl/xml editor with live preview of your transformed xml.

Ok, now put your xsl file somewhere, where we can use it later on. There is already a directory in our example core (which is named catalog) for xslt files, located at /opt/solr/catalog/conf/xslt, so save the xslt file there and give it an expressive name, like input_to_solr.xsl.

To check if your stylesheet works as expected you can use xsltproc. Install the package and do some transformation on your sample input xml located at ~/input.xml.

  • sudo apt-get install xsltproc
  • xsltproc /opt/solr/catalog/conf/xslt/input_to_solr.xsl ~/input.xml

This should give the following xml file, which corresponds to what Solr expects.

https://gist.github.com/3972643

Setup the schema

Before trying to index some data, we have to tell Solr which fields we are using. The corresponding schema configuration file is located at /opt/solr/catalog/conf/schema.xml. Remember that we have a core named “catalog” and the schema file is just located in the conf directory below that core’s root.

Solr already knows certain fields, e.g. id or subject, but some fields are missing in the default schema, like “type” or “format”. But even the fields which the default schema defines may not match our input data, so let’s alter the default schema to match our input.

In /opt/solr/catalog/conf/schema.xml, add the following lines inside the “fields” element.

https://gist.github.com/3972656

But we are not finished. The Solr default schema defines a field named “subject”, but does not declare this field to be “multi valued”, as in our input xml. So we need to alter the existing field definition and add the “multiValued” attribute set to “true”. After editing, the line should look like this

https://gist.github.com/3972660

Now that the schema file corresponds to our input, restart tomcat to ensure the new schema is loaded.

  • sudo /etc/init.d/tomcat6 restart

Index some data

Now that you are able to transform your input into something Solr understands, you can apply this transformation to all input files and POST them to Solr. Let’s do this for our input.xml.

  • xsltproc /opt/solr/catalog/conf/xslt/input_to_solr.xsl ~/input.xml|curl "http://localhost:8080/solr/update?commit=true" --data-binary @- -H 'Content-type:application/xml'

Let’s look at this command. We transform our ~/input.xml file using xsltproc and the stylesheet located at /opt/solr/catalog/conf/xslt/input_to_solr.xsl and pipe the result to curl.

Curl does a POST (with the “Content-Type” header set to “application/xml”) to http://localhost:8080/solr/update with “commit=true”, taking the data from the pipe as --data-binary (“@-” means: read the file from STDIN).
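If you prefer doing this step from Ruby instead of the shell, a rough net/http equivalent of the curl call could look like this (a sketch; it assumes the transformed document has already been written to ~/solr_doc.xml, e.g. by redirecting the xsltproc output):

require "net/http"
require "uri"

# POST the transformed document to Solr's update handler and commit it,
# mirroring the curl command above.
uri = URI.parse("http://localhost:8080/solr/update?commit=true")

request = Net::HTTP::Post.new(uri.request_uri)
request["Content-Type"] = "application/xml"
request.body = File.read(File.expand_path("~/solr_doc.xml"))

response = Net::HTTP.new(uri.host, uri.port).request(request)
puts response.code  # "200" if the update succeeded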

The result should be as follows (except QTime, which may be different for you).

https://gist.github.com/3972685

Congratulations, you have indexed your first file. Let’s search it using curl.

  • curl http://localhost:8080/solr/catalog/query?q=status:foo_status

This should give you a nice JSON representation of the input file, like this

https://gist.github.com/3972691

Customize the default query handler

At the moment, you can query items by specifying field:value pairs like “status:foo_status”. But what you probably want is to query for terms in multiple (or all) fields, without naming them. This can be accomplished by setting some smart defaults for a query handler in your core’s Solr config. In this tutorial, we have a single core named “catalog”, so the config would be /opt/solr/catalog/conf/solrconfig.xml.

Search for the definition of the “requestHandler” with name=”/query”. This element has a “lst” child named “defaults”. Here you can define query parameters that should be assumed if not given in the request.

Let’s combine the ability to define defaults with a different query mode like (e)dismax. Change the “requestHandler” element for /query to look like this.

https://gist.github.com/3972892

Now, if you issue a query, the dismax query mode is used. This mode provides the qf parameter, where you can specify the fields in which Solr should search for the query term. In this example all fields are searched. With this request handler, you can query items by simply doing

  • curl http://192.168.56.100:8080/solr/catalog/query?q=foo_status

A last note on the qf parameter. You can define “boost” values for each field, which will make some fields more relevant than others when doing a query. Boost values can be written by using ^, e.g. title^2.
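Queried from Ruby, the JSON response is easy to consume. Here is a small sketch against the /query handler configured above (assuming Solr runs locally, as in the indexing examples):

require "net/http"
require "uri"
require "json"

uri = URI.parse("http://localhost:8080/solr/catalog/query")
uri.query = URI.encode_www_form(q: "foo_status")

result = JSON.parse(Net::HTTP.get(uri))
puts result["response"]["numFound"]            # number of matching documents
puts result["response"]["docs"].first.inspect  # the first matching document as a hash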


Include files from git submodules when building a ruby gem

Today I ran into the following situation. I wanted to build a gem with bundler which has some vendor assets included as git submodules. My directory structure was something like this

.git/
app/*
lib/*
vendor/assets/javascripts/es5-shim (submodule)
vendor/assets/javascripts/pathjs (submodule)

When doing

rake build

the files from the submodules were not included in the gem, because the gemspec specifies the files as follows.

gem.files = `git ls-files`.split($\)

Unfortunately, git ls-files does not list files from submodules, and that’s why these files are not included in the gem.

I solved this by utilizing git submodule’s foreach command in combination with some Ruby string manipulation.

The resulting gemspec looks like this.


# -*- encoding: utf-8 -*-
require File.expand_path('../lib/example/version', __FILE__)

Gem::Specification.new do |gem|
  gem.authors       = ["John Doe"]
  gem.email         = ["john_doe@example.org"]
  gem.description   = %q{Write a gem description}
  gem.summary       = %q{Write a gem summary}
  gem.homepage      = ""

  gem.files         = `git ls-files`.split($\)
  gem.executables   = gem.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
  gem.test_files    = gem.files.grep(%r{^(test|spec|features)/})
  gem.name          = "example"
  gem.require_paths = ["lib"]
  gem.version       = Example::VERSION

  # get an array of submodule dirs by executing 'pwd' inside each submodule
  `git submodule --quiet foreach pwd`.split($\).each do |submodule_path|
    # for each submodule, change working directory to that submodule
    Dir.chdir(submodule_path) do
      # issue git ls-files in submodule's directory
      submodule_files = `git ls-files`.split($\)

      # prepend the submodule path to create absolute file paths
      submodule_files_fullpaths = submodule_files.map do |filename|
        "#{submodule_path}/#{filename}"
      end

      # remove leading path parts to get paths relative to the gem's root dir
      # (this assumes that the gemspec resides in the gem's root dir)
      submodule_files_paths = submodule_files_fullpaths.map do |filename|
        filename.gsub "#{File.dirname(__FILE__)}/", ""
      end

      # add relative paths to gem.files
      gem.files += submodule_files_paths
    end
  end
end
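To verify that the submodule files actually end up in the package, you can load the gemspec from the gem's root directory and inspect gem.files before running rake build (a sketch; the paths match the directory layout above):

# run from the gem's root directory, e.g. in irb
spec = Gem::Specification.load("example.gemspec")

puts spec.files.grep(%r{^vendor/assets/javascripts/})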


Matt Connolly suggested a shorter version of the gemspec. Have a look at his comment.


Using underscore.js to wrap node.js child_process.spawn

As I was writing jake files to build my projects, I was faced with the problem of spawning new child processes using node’s child_process.spawn. In order to get the output of these child processes on the console, you have to attach to the child process’s stdout ‘data’ event. So my code looked like this.

var child_process = require('child_process');

var child = child_process.spawn(cmd, params.split(' '), {
 cwd: '.'
});

child.stdout.on('data', function(data) {
  process.stdout.write('' + data);
});

child.stderr.on('data', function(data) {
  process.stderr.write('' + data);
});

That does the job. But you have to write this for every process you want to spawn. Every time the same .stdout.on(‘data’, function(data) { … } stuff. So I thought this would be a great chance to play with the wrap function of underscore.js, so that stdout and stderr are written to the console by default. The resulting code looks like this.

var child_process = require('child_process');
var underscore = require('underscore');

child_process.spawn = underscore.wrap(child_process.spawn, function(func) {
  // We have to strip arguments[0] out, because that is the function
  // actually being wrapped. Unfortunately, 'arguments' is no real array,
  // so shift() won't work. That's why we have to use Array.prototype.splice 
  // or loop over the arguments. Of course splice is cleaner. Thx to Ryan
  // McGrath for this optimization.
  // (splice mutates 'arguments' in place; its return value is just the
  // removed element, so we must not pass that to apply)
  Array.prototype.splice.call(arguments, 0, 1);

  // Call the wrapped function with the now cleaned arguments
  var childProcess = func.apply(this, arguments);

  childProcess.stdout.on('data', function(data) {
    process.stdout.write('' + data);
  });

  childProcess.stderr.on('data', function(data) {
    process.stderr.write('' + data);
  });

  return childProcess;
});

…

var child = child_process.spawn(cmd, params.split(' '), {
  cwd: '.'
});

Now every time you use child_process.spawn, stdout and stderr are tied to process.stdout and process.stderr automatically.


Debug Etherpad(-lite) server-side JavaScript

There is a major challenge when developing code for etherpad. While it is easy to see what happens on the client side using Firebug, there’s nothing comparable for server-side code. That’s a big problem if you want to learn how etherpad works on the server side. Sure, you can read the sources, but without viewing a specific function call in action and examining what variable has what value, this is not very comfortable.

A few days ago, there was a mail on etherpad-dev telling that there’s a new project growing up named etherpad-lite, which uses node.js instead of AppJet for server-side JavaScript. In addition, they mentioned that they were able to reuse 98% of the existing code with just minor adjustments. So what, they use node.js, not AppJet, where’s the deal?

There are options to debug (server-side) node.js code. This fact, in conjunction with the statement that the server-side code used in etherpad-lite is almost the same as in stock Etherpad, leads to a simple idea: debugging etherpad-lite will give you most of the information you need to understand how stock etherpad works on the server side.

The development environment

In this section I will tell you how to create an appropriate development environment to debug etherpad-lite server-side JavaScript code. The following explanations assume that you have Ubuntu 10.04 installed. First, you have to install some required software packages.

  • apt-get install libssl-dev*
  • apt-get install g++
  • apt-get install curl
  • apt-get install libsqlite3-dev gzip git-core

Now it’s time to build node.js. The etherpad-lite site states that the current development snapshot works with node.js 0.4.x, so we will grab exactly this release (and not 0.5.x). In order to build node.js, create a directory named github in your home directory and clone the 0.4 release of node.js to this location.

  • mkdir ~/github
  • cd ~/github
  • mkdir joyent
  • cd joyent
  • git clone --depth 1 git://github.com/joyent/node.git
  • cd node
  • git checkout origin/v0.4

Node.js has to be built and installed. In order to do so, just create a directory for node.js in your home directory. Afterwards, run configure, make and make install.

  • mkdir ~/local
  • ./configure --prefix=$HOME/local/node
  • make
  • make install

Of course, no program will find node.js in that uncommon location, so you have to add the following lines to your ~/.profile.

  • echo 'export PATH=$HOME/local/node/bin:$PATH' >> ~/.profile
  • echo 'export NODE_PATH=$HOME/local/node:$HOME/local/node/lib/node_modules' >> ~/.profile
  • source ~/.profile

Now you are able to grab etherpad-lite. Therefore, create a directory below ~/github to store the cloned etherpad-lite repository from github.

  • mkdir ~/github/Pita
  • cd ~/github/Pita
  • git clone git://github.com/Pita/etherpad-lite.git
  • cd etherpad-lite

In order to finish the etherpad-lite setup and to install the web-based node.js debugger later on, install the Node Package Manager (npm).

  • curl http://npmjs.org/install.sh | sh
  • cd ~/github/Pita/etherpad-lite
  • npm install
  • bin/run.sh (the debug run script does not fetch jquery at the first start …)
  • Abort etherpad-lite using CTRL-C

It’s time to install node-inspector, the web frontend to the node.js debugger.

  • npm -g install node-inspector
  • cd ~/github/Pita/etherpad-lite
  • bin/runDebug.sh (or bin/debugRun.sh, depending on your version of etherpad-lite)

The node-inspector web-based node.js debugger front-end will now accept connections on port 8080, BUT YOU HAVE TO USE GOOGLE CHROME. Other browsers will not work (you can try, if you want). Now click on “Scripts” and you will see the server-side JavaScript code. Using node-inspector is beyond the scope of this post; the following video will give you a short introduction.

http://www.youtube.com/watch?v=AOnK3NVnxL8


Invoking Hyper-V WMI API methods with reference parameters using WS-Management


Hi !

I want to write down my experiences with invoking Hyper-V API methods using WS-Management tools (or wsman for short), such as WinRM, which is part of the Windows Remote Management framework. The goal is to call the Hyper-V API method GetSummaryInformation for specific virtual machines and to get only the information requested during the method call. This post assumes you have Windows Server 2008 R2 with WinRM configured and running. Additionally, WinRM will be used from a second machine as the WS-Management client.

General method invocation using WS-Management

Invocation of methods exposed using WS-Management follows a simple mechanism. The header of the invoke message contains the endpoint reference where the method has to be invoked, plus the method name. The body contains the parameters for the method, encoded in a simple [MethodName]_INPUT block. Here is an example of the header of a method invocation message, with just the relevant header fields.

<s:Envelope ...>
 <s:Header ...>
 ...
 <w:ResourceURI s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService</w:ResourceURI>
 <a:Action s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService/GetSummaryInformation</a:Action>
 <w:SelectorSet>
  <w:Selector Name="CreationClassName">Msvm_VirtualSystemManagementService</w:Selector>
  <w:Selector Name="Name">vmms</w:Selector>
  <w:Selector Name="SystemCreationClassName">Msvm_ComputerSystem</w:Selector>
  <w:Selector Name="SystemName">HYPERV-1</w:Selector>
 </w:SelectorSet>
 ...
 </s:Header>
 <s:Body>
  <p:GetSummaryInformation_INPUT xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService">
  ...
  </p:GetSummaryInformation_INPUT>
 </s:Body>
</s:Envelope>

Using WinRM, a method invocation is issued by winrm invoke [MethodName] [ResourceURI] -file:[ParameterFile.xml]

We use an xml file for parameter delivery here, because dealing with references introduces large parameter strings, and it is more convenient to write those in a file and instruct WinRM to use this file for the parameters. Actually, the content of this file is just pasted into the method invocation message at the right position (inside the body).

Signature of GetSummaryInformation

First we want to look at the signature of GetSummaryInformation. The first input parameter is an array of references to CIM_VirtualSystemSettingData instances, the second input parameter is an array of integers indicating which information should be retrieved.

uint32 GetSummaryInformation( [in] CIM_VirtualSystemSettingData REF SettingData[], [in] uint32 RequestedInformation[], [out] Msvm_SummaryInformation SummaryInformation[] ); 

Parameter arrays in WS-Management

The first question is how to encode an array of values. Let’s look at the second parameter first, because it is easier. RequestedInformation is an array of uint32. Arrays are encoded by just writing an element multiple times. For example, for an array RequestedInformation with the three elements 1, 2 and 4, one would write the following

<p:RequestedInformation>1</p:RequestedInformation>
<p:RequestedInformation>2</p:RequestedInformation>
<p:RequestedInformation>4</p:RequestedInformation>

Easy, huh 🙂

References as parameters

What about the REF parameter array, which indicates for which virtual machines one would like to get information? Since the array has to contain references, we need endpoint references (EPRs) pointing to the actual instances of CIM_VirtualSystemSettingData for which we want to retrieve information. To get these EPRs, enumerate the corresponding instances using WinRM, but with the special command line parameter -ReturnType:EPR

There are two additional parameters of interest. First, the -Shallow parameter restricts the enumeration to instances of exactly the given class, leaving out instances of child classes. This is what we want, because we are only interested in instances of exactly this class. Second, since we want to use this information to write our parameter xml file, xml output would be nice, so that we can simply copy and paste it into our own xml file later on. This can be achieved with -format:pretty.

Here is the command line for enumerating all instances of Msvm_VirtualSystemSettingData (which is a subclass of CIM_VirtualSystemSettingData and in fact what we want), returning only EPRs and generating nicely formatted xml output.

winrm enumerate http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData -Shallow -ReturnType:EPR -format:pretty

The output should look similar to this

<a:EndpointReference xml:lang="en-US" xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
 <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
 <a:ReferenceParameters>
  <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI >
  <w:SelectorSet>
   <w:Selector Name="InstanceID">Microsoft:4DA77B7B-7F11-4735-A18F-46B57D2438C7</w:Selector>
  </w:SelectorSet>
 </a:ReferenceParameters>
</a:EndpointReference>

<a:EndpointReference xml:lang="en-US" xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
 <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
 <a:ReferenceParameters>
  <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI>
  <w:SelectorSet>
   <w:Selector Name="InstanceID">Microsoft:9221DB03-BC5F-4485-8BDF-0206C093AC58</w:Selector>
  </w:SelectorSet>
 </a:ReferenceParameters>
</a:EndpointReference>

Now, the final question is how to encode these EPRs into the method parameters. According to the WS-CIM Mapping Specification version 1.0.1 (DSP0230), chapter 8.2 CIM References:

[…] the xs:any element [which is in fact our reference parameter] shall be replaced by the required wsa:EndpointReference child elements defined by Addressing recommendations, as if the property element were of type wsa:EndpointReferenceType. […]

What the EndpointReferenceType actually looks like can be seen in the WinRM output above, which gives the EPRs for the Msvm_VirtualSystemSettingData instances. DSP0230 states that a method parameter which is a reference should behave as if it were of type EndpointReferenceType, in fact by having the same child elements as this type. So our reference parameter would look like this.

<p:SettingData>
 <a:Address xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
 <a:ReferenceParameters xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
  <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI>
  <w:SelectorSet>
   <w:Selector Name="InstanceID">Microsoft:4DA77B7B-7F11-4735-A18F-46B57D2438C7</w:Selector>
  </w:SelectorSet>
 </a:ReferenceParameters>
</p:SettingData>

To get an array of these, just write them successively, as mentioned before.

Build the parameter input file

Let’s put everything together and build the parameter file to call GetSummaryInformation for two virtual machines. We want to get specific information from two virtual machines running on a Windows Server 2008 R2 host called HYPERV-1. We got the EPRs for the Msvm_VirtualSystemSettingData instances using WinRM enumerate (see the section References as parameters). We want to get the following information from these virtual machines

  • ElementName
  • NumberOfProcessors
  • EnabledState
  • Uptime

These correspond to the uint32 values 1, 4, 101 and 105. So our parameter input xml file looks as follows.

<p:GetSummaryInformation_INPUT xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService">

 <p:SettingData>
  <a:Address xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
  <a:ReferenceParameters xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
   <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI>
   <w:SelectorSet>
    <w:Selector Name="InstanceID">Microsoft:4DA77B7B-7F11-4735-A18F-46B57D2438C7</w:Selector>
   </w:SelectorSet>
  </a:ReferenceParameters>
 </p:SettingData>

 <p:SettingData>
  <a:Address xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
  <a:ReferenceParameters xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
   <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI>
   <w:SelectorSet>
    <w:Selector Name="InstanceID">Microsoft:9221DB03-BC5F-4485-8BDF-0206C093AC58</w:Selector>
   </w:SelectorSet>
  </a:ReferenceParameters>
 </p:SettingData>

 <p:RequestedInformation>1</p:RequestedInformation>
 <p:RequestedInformation>4</p:RequestedInformation>
 <p:RequestedInformation>101</p:RequestedInformation>
 <p:RequestedInformation>105</p:RequestedInformation>

</p:GetSummaryInformation_INPUT>

Let’s get the job done

First we need an instance of the Msvm_VirtualSystemManagementService on which to invoke the method. To get this, just enumerate all instances (there should only be one) and grab the EPR. If you don’t know how to get the EPR, please look at the section References as parameters, or, for the lazy ones, here is the appropriate WinRM command line

winrm enumerate http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService -ReturnType:EPR

This should give something like this.

EndpointReference
 Address = http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous
 ReferenceParameters
  ResourceURI = http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService
  SelectorSet
   Selector: CreationClassName = Msvm_VirtualSystemManagementService, Name = vmms, SystemCreationClassName = Msvm_ComputerSystem, SystemName = HYPERV-1

Compared to the -format:pretty output, we get plain text instead of xml. Since we have to build the resource URI by hand anyway and xml would not help with that, it is irrelevant at this point how the output is formatted.

Now we need to build the mentioned resource URI, which is in fact just the EPR of the Msvm_VirtualSystemManagementService instance in another notation. It corresponds to parameter encoding in URIs, but with a “+” as separator instead of the “&” known from query parameters on websites. So the complete EPR encoded as a URI is

http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService?CreationClassName=Msvm_VirtualSystemManagementService+Name=vmms+SystemCreationClassName=Msvm_ComputerSystem+SystemName=HYPERV-1

The complete command line to invoke GetSummaryInformation (from a different host, using basic authentication, which has to be enabled first) is

winrm invoke GetSummaryInformation http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService?CreationClassName=Msvm_VirtualSystemManagementService+Name=vmms+SystemCreationClassName=Msvm_ComputerSystem+SystemName=HYPERV-1 -file:input.xml -r:ip_or_hostname:port -a:Basic -u:Administrator -p:your_password

This should lead to an output similar to this

GetSummaryInformation_OUTPUT
 SummaryInformation
  CreationTime = null
  ElementName = Fedora11-1
  EnabledState = null
  GuestOperatingSystem = null
  HealthState = null
  Heartbeat = null
  MemoryUsage = null
  Name = null
  Notes = null
  NumberOfProcessors = 1
  ProcessorLoad = 0
  UpTime = 45639247
 SummaryInformation
  CreationTime = null
  ElementName = Debian-1
  EnabledState = null
  GuestOperatingSystem = null
  HealthState = null
  Heartbeat = null
  MemoryUsage = null
  Name = null
  Notes = null
  NumberOfProcessors = 1
  ProcessorLoad = null
  UpTime = 0
  ReturnValue = 0

Appendix A — The complete invocation message for GetSummaryInformation

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"
            xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wsman.xsd">
 <s:Header>
  <a:To>http://192.168.1.7:5985/wsman</a:To>
  <w:ResourceURI s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService</w:ResourceURI>
  <a:ReplyTo>
   <a:Address s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
  </a:ReplyTo>
  <a:Action s:mustUnderstand="true">http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService/GetSummaryInformation</a:Action>
  <w:MaxEnvelopeSize s:mustUnderstand="true">153600</w:MaxEnvelopeSize>
  <a:MessageID>uuid:C574798F-0160-4956-B00C-85EED13CF0B7</a:MessageID>
  <w:Locale xml:lang="en-US" s:mustUnderstand="false" />
  <p:DataLocale xml:lang="en-US" s:mustUnderstand="false" />
  <w:SelectorSet>
   <w:Selector Name="CreationClassName">Msvm_VirtualSystemManagementService</w:Selector>
   <w:Selector Name="Name">vmms</w:Selector><w:Selector Name="SystemCreationClassName">Msvm_ComputerSystem</w:Selector>
   <w:Selector Name="SystemName">HYPERV-1</w:Selector>
  </w:SelectorSet>
  <w:OperationTimeout>PT60000S</w:OperationTimeout>
 </s:Header>
 <s:Body>
  <p:GetSummaryInformation_INPUT xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemManagementService">
   <p:SettingData xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
    <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    <a:ReferenceParameters>
     <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI>
     <w:SelectorSet>
      <w:Selector Name="InstanceID">Microsoft:4DA77B7B-7F11-4735-A18F-46B57D2438C7</w:Selector>
     </w:SelectorSet>
    </a:ReferenceParameters>
   </p:SettingData>
   <p:SettingData xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
    <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    <a:ReferenceParameters>
     <w:ResourceURI>http://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/Msvm_VirtualSystemSettingData</w:ResourceURI>
     <w:SelectorSet>
      <w:Selector Name="InstanceID">Microsoft:9221DB03-BC5F-4485-8BDF-0206C093AC58</w:Selector>
     </w:SelectorSet>
    </a:ReferenceParameters>
   </p:SettingData>
   <p:RequestedInformation>1</p:RequestedInformation>
   <p:RequestedInformation>4</p:RequestedInformation>
   <p:RequestedInformation>101</p:RequestedInformation>
   <p:RequestedInformation>105</p:RequestedInformation>
  </p:GetSummaryInformation_INPUT>
 </s:Body>
</s:Envelope>

Links

[1] MSDN reference for GetSummaryInformation — http://msdn.microsoft.com/en-us/library/cc160706%28VS.85%29.aspx

[2] WS-CIM Mapping Specification (DSP0230) — http://www.dmtf.org/standards/wsman
