vmx

the blllog.

When npm link fails

2019-08-01 22:35

There are cases where linking local packages doesn't produce the same result as installing all packages from the registry. Here I'd like to tell the story of one of those real-world cases and conclude with a solution to the problem.

The problem

When you do an npm install, npm performs heavy module deduplication and hoisting, which doesn't always behave the same way in all cases. For example, if you npm link a package, the resulting node_modules tree is different. This may lead to unexpected runtime errors.

It happened to me recently and I thought I'd use exactly this real-world example to illustrate the problem and a possible solution to it.

Real world example

Preparations

Start by cloning the js-ipfs-mfs and js-ipfs-unixfs-importer repositories:

$ git clone https://github.com/ipfs/js-ipfs-mfs --branch v0.12.0 --depth 1
$ git clone https://github.com/ipfs/js-ipfs-unixfs-importer --branch v0.39.11 --depth 1

Our main module is js-ipfs-mfs. Let's say you want to make local changes to js-ipfs-unixfs-importer, which is a direct dependency of js-ipfs-mfs.

First of all, make sure that the tests currently pass (we just run a subset to get to the actual issue faster). I'm sorry that the installation takes so long and so much space; the dev dependencies are quite heavy.

$ cd js-ipfs-mfs
$ npm install
$ npx mocha test/write.spec.js
…
  53 passing (4s)
  1 pending

Ok, all tests passed.

Reproducing the issue

Before we even start modifying js-ipfs-unixfs-importer, we link it and check that the tests still pass.

$ cd js-ipfs-unixfs-importer
$ npm link
$ cd ../js-ipfs-mfs
$ npm link ipfs-unixfs-importer
$ npx mocha test/write.spec.js
…
  37 passing (2s)
  1 pending
  16 failing
…

Oh no, the tests failed. But why? The reason is deep down in the code. The root cause is in the hamt-sharding module and it's not even a bug. It just checks whether something is a Bucket:

static isBucket (o) {
  return o instanceof Bucket
}

instanceof only works if the object and the class it is checked against come from the exact same copy of the module. Let's see who is importing the hamt-sharding module:

$ npm ls hamt-sharding
ipfs-mfs@0.12.0 /home/vmx/misc/protocollabs/blog/when-npm-link-fails/js-ipfs-mfs
├── hamt-sharding@0.0.2
├─┬ ipfs-unixfs-exporter@0.37.7
│ └── hamt-sharding@0.0.2  deduped
└─┬ UNMET DEPENDENCY ipfs-unixfs-importer@0.39.11
  └── hamt-sharding@0.0.2  deduped

npm ERR! missing: ipfs-unixfs-importer@0.39.11, required by ipfs-mfs@0.12.0

Here we see that ipfs-mfs has a direct dependency on it, and indirect dependencies through ipfs-unixfs-exporter and ipfs-unixfs-importer. All of them use the same version (0.0.2), hence it's deduped and the instanceof call should work. But there's also an error about an UNMET DEPENDENCY: the ipfs-unixfs-importer module we linked.

To make it clear what's happening inside Node.js: when you require('hamt-sharding') from the ipfs-mfs code base, it will be loaded from the physical location js-ipfs-mfs/node_modules/hamt-sharding. When you require it from ipfs-unixfs-importer, it will be loaded from js-ipfs-mfs/node_modules/ipfs-unixfs-importer/node_modules/hamt-sharding, i.e. physically from js-ipfs-unixfs-importer/node_modules/hamt-sharding, as js-ipfs-mfs/node_modules/ipfs-unixfs-importer is just a symlink to a symlink pointing to that directory. So there are two separate copies of hamt-sharding in play.

When you do a normal installation without linking, you won't have this issue, as hamt-sharding will be properly deduplicated and only loaded once, from js-ipfs-mfs/node_modules/hamt-sharding.
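To see the effect in isolation, here is a minimal sketch. The two require paths and the duplicated Bucket module are made up for illustration; they stand for the two physical copies of hamt-sharding described above:

// Node.js caches modules by their resolved file path, so two physical copies
// of the same file export two distinct Bucket classes.
const { Bucket: BucketA } = require('./copy-a/bucket')
const { Bucket: BucketB } = require('./copy-b/bucket')

const bucket = new BucketB()
console.log(bucket instanceof BucketA) // false, although the code is identical
console.log(BucketA === BucketB)       // false, two separate class objects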

Possible workarounds that do not work

Still, you'd like to change ipfs-unixfs-importer locally and test those changes with ipfs-mfs without breaking anything. I had several ideas on how to work around this. I'll start with the ones that didn't work:

  1. Just delete the js-ipfs-unixfs-importer/node_modules/hamt-sharding directory. The module should still be found in the resolve paths of ipfs-mfs. No, it doesn't work: the tests fail because hamt-sharding can't be found.
  2. Global linking runs an npm install when you run the initial npm link. What if we remove js-ipfs-unixfs-importer/node_modules completely and symlink to the module manually? That also doesn't work; the hamt-sharding module still can't be found.
  3. Install ipfs-unixfs-importer directly with a relative path (npm install ../js-ipfs-unixfs-importer). No, that doesn't work either: it will still have its own node_modules/hamt-sharding, it won't be properly deduplicated.

There must be a way to make local changes to a module and test them without publishing it each time. Luckily there really is.

Working workaround

I'd like to thank my colleague Hugo Dias for this workaround that he has been using for a while already.

You can just replicate what a normal npm install <package> would do: you pack the module and then install that packed package. In our case that means:

$ cd js-ipfs-mfs
$ npm pack ../js-ipfs-unixfs-importer
…
ipfs-unixfs-importer-0.39.11.tgz
$ npm install ipfs-unixfs-importer-0.39.11.tgz
…
+ ipfs-unixfs-importer@0.39.11
added 59 packages from 76 contributors and updated 1 package in 31.698

Now all tests pass.

This is quite a manual process. Luckily Hugo created a module to automate exactly that workflow. It's called connect-deps.

Conclusion

Sometimes linking packages doesn't create the same module structure as a normal installation, and you need to use packing instead. To automate this you can use connect-deps.

Categories: en, JavaScript, npm

Joining Protocol Labs

2018-01-24 22:35

I’m pumped to announce that I’m joining Protocol Labs as a software engineer. Those following me on Twitter or looking at my GitHub activity might already have gotten some hints.

Short term

My main focus is currently on IPLD (InterPlanetary Linked Data). I’ll smooth things out and also work on the IPLD specs, mostly on IPLD Selectors. Those IPLD Selectors will be used to make the underlying graph more efficient to traverse (especially for IPFS). That’s a lot of buzzwords; I hope it will get clearer the more I blog about this.

To get started I’ve done the JavaScript IPLD implementations for Bitcoin and Zcash. Those are the basis to make easy traversal through the Bitcoin and Zcash blockchains possible.

Longer term

In the longer term I’ll be responsible for bringing IPLD to Rust. That’s especially exciting with Rust’s new WebAssembly backend. You’ll get a high-performance Rust implementation, but also one that works in browsers.

What about Noise?

Many of you probably know that I’ve been working full-time on Noise for the past 1.5 years. It’s shaping up nicely and is already quite usable. Of course I don’t want to see this project vanish, and it won’t. At the moment I only work part-time at Protocol Labs, to also have some time for Noise. In addition to that, there’s also interest within Protocol Labs to use Noise (or parts of it) for better query capabilities. So far these are only rough ideas I mentioned briefly at the end of my talk about Noise at the Lisbon IPFS Meetup two weeks ago. But what’s the distributed web without search?

What about geo?

I’m also part of the OSGeo community and the FOSS4G movement. So what’s the future there? I see a lot of potential in the Sneakernet. If geo-processing workflows are based around IPFS, you could use the same tools/scripts whether the data is stored somewhere in the cloud, or access your local mirror/dump if your Internet connection isn’t that fast/reliable.

I expect unreliable connectivity to be a hot topic at the FOSS4G 2018 conference in Dar es Salaam, Tanzania.

Conclusion

I’m super excited. It’s a great team and I’m looking forward to pushing the distributed web a bit forward.

Categories: en, ProtocolLabs, IPLD, IPFS, JavaScript, Rust, geo

Introduction to Noise’s Node.js API

2017-12-21 22:35

In the previous blog post about Noise we imported data with the help of some already prepared scripts. This time it’s an introduction to using Noise’s Promise-based Node.js API directly yourself.

The dataset we use is not a single ready-to-use file, but one that consists of several files. The data is the “Realized Cost Savings and Avoidance” for US government agencies. I’m really excited that such data gets openly published as JSON. I wish Germany were that advanced in this regard. If you want to know more about the structure of the data, there’s documentation about the JSON Schema; they even have an “OFCIO JSON User Guide for Realized Cost Savings” on how to produce the data out of Excel.

I’ve prepared a repository containing the final code and the data. But feel free to follow along with this tutorial yourself and just point to the data directory of that repository when running the script.

Let’s start with the boilerplate code for reading in those files and parsing them as JSON. But first create a new package:

mkdir noise-cost-savings
cd noise-cost-savings
npm init --force

You can use --force here as you probably won’t publish this package anyway. Put the boilerplate code below into a file called index.js. Please note that the code is kept as simple as possible; for a real-world application you’d surely want better error handling.

#!/usr/bin/env node
'use strict';

const fs = require('fs');
const path = require('path');

// The only command line argument is the directory where the data files are
const inputDir = process.argv[2];
console.log(`Loading data from ${inputDir}`);

fs.readdir(inputDir, (_err, files) => {
  files.forEach(file => {
    fs.readFile(path.join(inputDir, file), (_err, data) => {
      console.log(file);
      const json = JSON.parse(data);
      processFile(json);
    });
  });
});

const processFile = (data) => {
  // This is where our actual code goes
};

This code should already run. Checkout my repository with the data into some directory first:

git clone https://github.com/vmx/blog-introduction-to-noises-nodejs-api

Now run the script from above as:

node index.js <path-to-directory-from-my-repo-mentioned-above>/data

Before we take a closer look at the data, let’s install the Noise module. Please note that you need to have Rust installed (easiest is probably through rustup) before you can install Noise.

npm install noise-search

This will take a while. So let’s get back to code. Load the noise-search module by adding:

const noise = require('noise-search');

A Noise index needs to be opened and closed properly, else your script will hang and not terminate. Opening a new Noise index is easy. Just put this before reading the files:

const index = noise.open('costsavings', true);

This opens an index called costsavings and creates it if it doesn’t exist yet (that’s what the boolean true is for). Closing the index is more difficult due to the asynchronous nature of the code: we can close the index only after all the processing is done. Hence we wrap the fs.readFile(…) call in a Promise. The new code looks like this:

fs.readdir(inputDir, (_err, files) => {
  const promises = files.map(file => {
    return new Promise((resolve, reject) => {
      fs.readFile(path.join(inputDir, file), (err, data) => {
        if (err) {
          reject(err);
          throw err;
        }

        console.log(file);
        const json = JSON.parse(data);
        resolve(processFile(json));
      });
    });
  });
  Promise.all(promises).then(() => {
    console.log("Done.");
    index.close();
  });
});

If you run the script now, it should print out the file names as before and terminate with "Done.". A directory called costsavings got created when you ran the script; this is where the Noise index is stored.

Now let’s have a look at the data files. If you look at e.g. the cost savings file from the Department of Commerce (or the JSON Schema), you’ll see that it has a single field called "strategies", which contains an array with all strategies. We are free to pre-process the data as much as we want before we insert it into Noise, so let’s create a separate document for every strategy. Our processFile() function now looks like this:

const processFile = (data) => {
  // Use auto-generated Ids for the documents. Return a Promise so the caller
  // only resolves once all strategies have been added to the index.
  return Promise.all(data.strategies.map(strategy => index.add(strategy)));
};

Now all the strategies get inserted. Make sure you delete the index (the costsavings directory) if you re-run the script, else you would end up with duplicated entries, as different Ids are generated on every run.

To query the index you could use the Noise indexserve script that I’ve also used in the last blog post about Noise. Or we just add a small query at the end of the script after the loading is done. Our query function will do the query and output the result:

const queryNoise = async (query) => {
  const results = await index.query(query);
  for (const result of results) {
    console.log(result);
  }
};

There’s not much to say, except that it’s again a Promise-based API. Now hook this function up after the loading and before the index is closed. For that, replace the Promise.all(…) call with:

Promise.all(promises).then(async () => {
  await queryNoise('find {} return count()');
  console.log("Done.");
  index.close();
});

It’s a really simple query; it just returns the number of documents that are in there (644). After all this hard work, it’s time to run a more complicated query on this dataset to show that it was worth the effort. Let’s return the total net savings of all agencies in 2017. Replace the query find {} return count() with:

find {fy2017: {netOrGross: == "Net"}} return sum(.fy2017.amount)
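In context, only the query string passed to queryNoise() changes; the Promise.all(…) block from above otherwise stays the same:

Promise.all(promises).then(async () => {
  // Sum up the reported net savings for fiscal year 2017 across all agencies
  await queryNoise('find {fy2017: {netOrGross: == "Net"}} return sum(.fy2017.amount)');
  console.log("Done.");
  index.close();
});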

That’s $845m savings. Not bad at all!

You can learn more about the Noise Node.js API from the README at the corresponding repository. If you want to learn more about possible queries, have a look at the Noise Query Language reference.

Happy cost saving!

Categories: en, Noise, Node, JavaScript, Rust

Exploring data with Noise

2017-12-12 22:35

This is a quick introduction on how to explore some JSON data with Noise. We won’t do any pre-processing, but just load the data into Noise and see what we can do with it. Sometimes the JSON you get needs some tweaking before further analysis makes sense, for example because you want to rename fields or because numbers are stored as strings. This exploration phase can be used to get a feeling for the data and for which parts might need some adjustments.

Finding decent ready-to-use data that contains some nicely structured JSON was harder than I thought. Most datasets are either GeoJSON or CSV masqueraded as JSON. But I was lucky and found a JSON dump of the CVE database provided by CIRCL. So we’ll dig into the CVE (Common Vulnerabilities and Exposures) database to find out more about all those security vulnerabilities.

Noise has a Node.js binding to get started easily. I won’t dig into the API for now. Instead I’ve prepared two scripts: one to load the data from a file containing newline-separated JSON, and another one for serving the Noise index over HTTP, so that we can explore the data via curl.

Prerequisites

As we use the Node.js binding for Noise, you need to have Node.js, npm and Rust (easiest is probably through rustup) installed.

I’ve created a repository with the two scripts mentioned above plus a subset of the CIRCL CVE dataset. Feel free to download the full dataset from the CIRCL Open Data page (1.2G unpacked) and load it into Noise. Please note that Noise isn’t performance-optimised at all yet, so the import takes some time, as the hard work of all the indexing is done at insertion time.

git clone https://github.com/vmx/blog-exploring-data-with-noise
cd blog-exploring-data-with-noise
npm install

Now everything we need should be installed. Let’s load the data into Noise and run a query to verify that it was loaded properly.

Loading the data and verifying the installation

Loading the data is as easy as:

npx dataload circl-cve.json

For every inserted record one dot will be printed.

To spin up the simple HTTP server, just run:

npx indexserve circl-cve

To verify it does actually respond to queries, try:

curl -X POST http://127.0.0.1:3000/query -d 'find {} return count()'

If all documents got inserted correctly it should return

[
1000
]

Everything is set up properly, now it’s time to actually explore the data.

Exploring the data

We don’t have a clue yet what the data looks like, so let’s start by looking at a single document:

curl -X POST http://127.0.0.1:3000/query -d 'find {} return . limit 1'
[
{
  "Modified": "2017-01-02 17:59:00.147000",
  "Published": "2017-01-02 17:59:00.133000",
  "_id": "34de83b0d3c547c089635c3a8b4960f2",
  "cvss": null,
  "cwe": "Unknown",
  "id": "CVE-2017-5005",
  "last-modified": {
    "$date": 1483379940147
  },
  "references": [
    "https://github.com/payatu/QuickHeal",
    "https://www.youtube.com/watch?v=h9LOsv4XE00"
  ],
  "summary": "Stack-based buffer overflow in Quick Heal Internet Security 10.1.0.316 and earlier, Total Security 10.1.0.316 and earlier, and AntiVirus Pro 10.1.0.316 and earlier on OS X allows remote attackers to execute arbitrary code via a crafted LC_UNIXTHREAD.cmdsize field in a Mach-O file that is mishandled during a Security Scan (aka Custom Scan) operation.",
  "vulnerable_configuration": [],
  "vulnerable_configuration_cpe_2_2": []
}
]

The query above means: “Find all documents without restrictions and return their full contents. Limit it to a single result.”

You don’t always want to return all documents, but rather filter them based on certain conditions. Let’s start with the word match operator ~=. It matches documents that contain the given words in a specific field, in our case "summary". As “buffer overflow” is a common attack vector, let’s search for all documents that contain it in the summary.

curl -X POST http://127.0.0.1:3000/query -d 'find {summary: ~= "buffer overflow"}'
[
"34de83b0d3c547c089635c3a8b4960f2",
"8dff5ea0e5594e498112abf1c222d653",
"741cfaa4b7ae43909d1da153747975c9",
…
"b7419042c9464a7b96d3df74451cb4a7",
"d379e9fda704446982cee8638f32e72b"
]

That’s quite a long list of random characters. Noise assigns an Id to every inserted document that doesn’t contain an "_id" field. By default Noise returns those Ids of the matching documents, so no return value is equivalent to return ._id. Let’s return the CVE numbers of the matching vulnerabilities instead. That field is called "id":

curl -X POST http://127.0.0.1:3000/query -d 'find {summary: ~= "buffer overflow"} return .id'
[
"CVE-2017-5005",
"CVE-2016-9942",
…
"CVE-2015-2710",
"CVE-2015-2666"
]

If you want to know how many there are, just append a return count() to the query:

curl -X POST http://127.0.0.1:3000/query -d 'find {summary: ~= "buffer overflow"} return count()'
[
61
]

Or we can of course return the full documents to see if there are further interesting things to look at:

curl -X POST http://127.0.0.1:3000/query -d 'find {summary: ~= "buffer overflow"} return .'
…

I won’t post the output here, it’s way too much. If you scroll through the output, you’ll see that some documents contain a field named "capec", which is probably about the Common Attack Pattern Enumeration and Classification. Let’s have a closer look at one of those, e.g. from “CVE-2015-8388”:

curl -X POST http://127.0.0.1:3000/query -d 'find {id: == "CVE-2015-8388"} return .capec'
[
[
  {
    "id": "15",
    "name": "Command Delimiters",
    "prerequisites": …
    "related_weakness": [
      "146",
      "77",
      …
    ],
    "solutions": …
    "summary": …
  },
  …

This time we’ve used the exact match operator ==. As the CVEs have unique Ids, it only returned a single document. It’s again a lot of data; we might only care about the CAPEC names, so let’s return those:

curl -X POST http://127.0.0.1:3000/query -d 'find {id: == "CVE-2015-8388"} return .capec[].name'
[
[
  "Command Delimiters",
  "Flash Parameter Injection",
  "Argument Injection",
  "Using Slashes in Alternate Encoding"
]
]

Note that it is an array within an array. The reason is that in this case we only return the CAPEC names of a single document, but our filter condition could of course match more documents, like the word match operator did when we were searching for “buffer overflow”.

Let’s find all CVEs that have a CAPEC named “Command Delimiters”.

curl -X POST http://127.0.0.1:3000/query -d 'find {capec: [{name: == "Command Delimiters"}]} return .id'
[
"CVE-2015-8389",
"CVE-2015-8388",
"CVE-2015-4244",
"CVE-2015-4224",
"CVE-2015-2265",
"CVE-2015-1986",
"CVE-2015-1949",
"CVE-2015-1938"
]

The CAPEC data also contains references to related weaknesses as we’ve seen before. Let’s return the related_weakness of all CVEs that have the CAPEC name “Command Delimiters”.

curl -X POST http://127.0.0.1:3000/query -d 'find {capec: [{name: == "Command Delimiters"}]} return {cve: .id, related: .capec[].related_weakness}'
[
{
  "cve": "CVE-2015-8389",
  "related": [
    [
      "146",
      "77",
      …
    ],
    [
      "184",
      "185",
      "697"
    ],
    …
  ]
},
{
  "cve": "CVE-2015-8388",
  "related": [
  …
  ]
},
…
]

That’s not really what we were after. This returns the related weaknesses of all CAPECs and not just of the one named “Command Delimiters”. The solution is a so-called bind variable. You can store an array element that matches a condition in a variable which can then be re-used in the return value.

Just prefix the array condition with a variable name separated by two colons:

find {capec: commdelim::[{name: == "Command Delimiters"}]}

And use it in the return value like any other path:

return {cve: .id, related: commdelim.related_weakness}

So the full query is:

curl -X POST http://127.0.0.1:3000/query -d 'find {capec: commdelim::[{name: == "Command Delimiters"}]} return {cve: .id, related: commdelim.related_weakness}'
[
{
  "cve": "CVE-2015-8389",
  "related": [
    [
      "146",
      "77",
      …
    ]
  ]
},
{
  "cve": "CVE-2015-8388",
  "related": [
    [
      "146",
      "77",
      …
    ]
  ]
},
…
]

The result isn’t that exciting as it’s the same related weaknesses for all CVEs, but of course they could be completely arbitrary. There’s no limitation on the schema.

So far we haven’t done any range queries yet. So let’s have a look at all CVEs that were last modified on December 28th, 2016 and have a “High” severity rating according to the Common Vulnerability Scoring System. First we need to determine the correct timestamps:

date --utc --date="2016-12-28" "+%s"
1482883200
date --utc --date="2016-12-29" "+%s"
1482969600

Please note that the "last-modified" field has timestamps with 13 digits (ours have 10), which means that they are in milliseconds, so we just append three zeros and we’re good. The severity rating is stored in the field "cvss"; “High” severity means a value from 7.0–8.9. We need to put the field name last-modified in quotes as it contains a dash (just as you’d do in JavaScript). The final query is:

curl -X POST http://127.0.0.1:3000/query -d 'find {"last-modified": {$date: >= 1482883200000, $date: < 1482969600000}, cvss: >= 7.0, cvss: <=8.9} return .id'
[
"CVE-2015-4199",
"CVE-2015-4200",
"CVE-2015-4224",
"CVE-2015-4227",
"CVE-2015-4230",
"CVE-2015-4234",
"CVE-2015-4208",
"CVE-2015-4526"
]

This was an introduction into basic querying of Noise. If you want to know about further capabilities you can have a look at the Noise Query Language reference or stay tuned for further blog posts.

Happy exploration!

Categories: en, Noise, Node, JavaScript, Rust

LXJS 2013

2013-10-06 22:35

The LXJS conference was a blast like last year. Well organized, great speakers, nice parties and an overwhelming overall atmosphere. It's definitely a conference that is in my regular schedule.

The talks

It was a pleasure to see such a variety of different talk styles. Whenever you get invited/accepted to give a talk at the LXJS, be sure your presentation style is outstanding. My two favourite ones were Michal Budzynski – Firefox OS army! and Jonathan Lipps – mobile automation made awesome. Playing games on stage or singing songs is something you won't see at many other conferences.

Another presentation I really enjoyed was the one about designing for accessibility. Laura Kalbag really got the message across and showed great examples.

Also interesting was the talk Digital Feudalism & How to Avoid It. It was about user experience and touched a lot of topics, from business models over privacy to the problems of open source. I really liked the whole presentation, from the contents to the presentation style, but sadly only up to shortly before the end of the talk. Aral Balkan closed with his new startup that creates a new phone with a great overall experience. As far as I know there's no information available on what Codename Prometheus will be based on. If it's based on Firefox OS I can see the point; if it's something custom I see it doomed to fail.

A really enjoyable talk came from Vyacheslav Egorov. It was about microbenchmarking pitfalls and had great depth, while being super entertaining.

The people

I've met a lot new ones and plenty of people I already know. It was a good mixture with many great conversations. There's not really a point mentioning all of them, you know who you are.

On the boat trip I learned that Mountain View (that link is funnier than I thought, given that this is a blog about a JavaScript conference) is not one of the most boring places, but actually has something to offer if you live there (recommended for young singles).

The conference itself

The conference was very well organized. Thanks to David Dias and all the others who organized the event (there should be a link here, but I couldn't find one to the organizers). Having a cinema as a venue is always nice: comfortable seats and a big screen for the presentations.

Live streaming the talks and having them available immediately afterwards on YouTube is really nice. So even if you can't attend, you still get all the great talks if you want to.

The only critique I have is the lunch. Those baguettes were OK and I didn't leave hungry, but the food last time was just so much better.

Conclusion

The LXJS 2013 was great and I'm looking forward to seeing everyone again at this well-organized conference next year!

Categories: en, JavaScript, conference

LXJS 2012

2012-10-01 22:35

The LXJS conference was really a blast. Well organized, great speakers, nice parties and an overwhelming overall atmosphere. My talk about bidirectional transformations also went well.

My talk

With my talk "Bidirectional transformations with lenses", it was the first time I've talked about something not directly geo-related at a conference, though I couldn't leave out some references to the geo world. The whole topic deserves a blog post of its own, hence I'll just leave a reference to the slides of my talk, the recording from LXJS and the GitHub repository of jslens.

The other talks

Most talks were of high quality and it was great to learn about new things. Highlights for me were the talk about Fireworks (of which there doesn't seem to be a recording), the one about Helicopters, the one about how to manage open source projects properly and Jan's talk about JavaScript's world domination that made me think.

All presentations were recorded, so you can watch them now to find out what you've missed.

Format of the conference

It was the first single-track conference I've been to and I really liked it. Everyone got to see the same presentations and you don't feel like you've missed something. As a speaker you have the advantage that no well-known person is speaking at the same time and drawing attendees away from your talk. Everything is focused around a single stage where everyone is excited about what comes next.

The talks were grouped into categories, which made a lot of sense. Though it was a bit strange to hear about new JavaScript-based languages in two different slots.

The events around the conference

The conference had a pre, middle and after party. It was really good to get in touch with people there. I also liked the idea of not making a difference between speakers and attendees with a speakers' dinner or something similar. For the after-after party a huge group of people just kept on having fun. The people didn't split up as much as I would've expected. This speaks for the great atmosphere and the nice group of attendees.

Conclusion

I really had a great time and it was fun to meet so many old friends from the CouchOne days, but also to meet a lot of interesting new people. I'm really looking forward to the 2013 edition of the LXJS.

Categories: en, JavaScript, conference

FOSS4G 2011: Report

2011-09-20 22:35

The FOSS4G 2011 is over now. Time for a small report. The crowd was amazing and it was again the ultimate gathering of the Free and Open Source for Geospatial developer tribe. Solid presentations and great evenings.

My talk: The State of GeoCouch

I'm really happy with how my talk went; I really enjoyed it. There were lots of people (although there was a talk from Frank Warmerdam at the same time) asking interesting questions at the end.

The talk is not only about GeoCouch but also gives you an overview of some of the features it leverages from Apache CouchDB. In the end you should know why you might want to use GeoCouch for your next project.

You can get the slides right here.

Other talks

I was happy to see that there was another talk about GeoCouch. Other talks I really enjoyed were:

And of course there were also great talks in the plenary sessions, from Paul Ramsey about Why do you do that? An exploration of open source business models, and Schuyler Erle's very funny lightning talk about Pivoting to Monetize Mobile Hyperlocal Social Gamification by Going Viral.

Code Sprint

At the code sprint I was working on MapQuery together with Steven Ottens and Justin Penka. Steven was working on TMS support, Justin on a 6-minute tutorial, and I on making it possible to add features manually.

The OpenLayers developers migrated their development from Subversion to Git. OpenLayers is now available on GitHub.

And luckily there was a fire alarm in between to take a group photograph.

Future of the FOSS4G

I really hope there won't be a yearly FOSS4G conference for the whole of the US. There should be regional events, as I think one big one would draw the attention away from the international conference. Why should you fly to Beijing for the FOSS4G 2012 if you can meet the majority of the developers in the US as well?

Final words

The FOSS4G was great. It was well organized and people were always out in the evenings. The only minor nitpick is that many people working remotely had the city of their company on their name badge and not the one they live in. It seems that the original form you had to fill in was confusing. So for next year it should perhaps say “Location where you live”. Hence I still don't believe that there were more Dutch than German people at the conference (Tik hem aan, ouwe! ;)

Categories: en, CouchDB, GeoCouch, MapQuery, Erlang, JavaScript, geo

Bolsena hacking event

2010-06-11 22:35

The OSGeo hacking event in Bolsena/Italy was great. Many interesting people sitting all day in front of their laptops, surrounded by beautiful scenery and nice warm sunny weather. It gets even better when you get meat for lunch and dinner.

I had the chance to tell people a bit more about CouchDB and Couchapps.

One project I hadn't heard that much of before was deegree. They build the whole stack of OGC services you could imagine. For me it was of interest that they have a blob storage in their upcoming 3.0 release: the data isn't flattened into SQL tables but stored as blobs. This sounds like a good use case for a CouchDB backend in the future.

I was working with Simon Pigot on a GeoNetwork re-implementation based on CouchDB using Couchapp. We got the basic stuff working: putting an XML document into the database, editing it and returning the new document, as well as full-text indexing with couchdb-lucene. Next steps are improving the JSON-to-XML mapping and integrating spatial search based on GeoCouch.

The event was really enjoyable. Thanks to Couchio for sponsoring the trip, thanks to Jeroen for organizing it, and thanks to all the other hackers who made it such an awesome event. Hope to see you next year!

Categories: en, CouchDB, JavaScript, geo

Drag as long as you want

2009-11-11 22:35

It has been a long-outstanding bug (officially a missing feature) in OpenLayers that has annoyed me since the first time I used OpenLayers. I’m talking about ticket #39: “Allow pan-dragging while outside map until mouseup”.

Normally when you drag the map in OpenLayers it will stop dragging as soon as you hit the edge of the map viewport (the div that contains the map). Whenever you have a small map, but a huge window and a loooong way to drag, it can get quite annoying, as the maximum distance you can drag at once is the size of that viewport.

But yesterday it finally happened: a patch to fix it landed in trunk. A first rough cut was made at the OpenLayers code sprint at the FOSS4G. Andreas Hocevar reviewed the code and made a more unobtrusive version of it (thanks again).

Try these two examples to see the difference. Click on the map and drag it a long way to the right and back to the left again (you might need to zoom in a bit to see the full effect):

As it is a new feature, it isn’t enabled by default (and it's only available on current SVN trunk; it will be part of OpenLayers 2.9). To enable it on your map, just use the following code to add the documentDrag parameter to the DragPan control (you obviously need a recent SVN checkout).
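A minimal sketch of what that could look like (the original snippet isn’t reproduced here, so the exact control setup below is an assumption; it configures the DragPan control directly with documentDrag and activates it):

// Hypothetical setup: a map whose DragPan control has documentDrag enabled,
// so dragging keeps working outside the map viewport.
var dragPan = new OpenLayers.Control.DragPan({documentDrag: true});
var map = new OpenLayers.Map('map', {controls: [dragPan, new OpenLayers.Control.PanZoom()]});
dragPan.activate();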

Update (2009-11-18): It got even easier with r9805:

// Use default controls but with documentDrag enabled.
var controls = [
    new OpenLayers.Control.Navigation({documentDrag: true}),
    new OpenLayers.Control.PanZoom(),
    new OpenLayers.Control.ArgParser(),
    new OpenLayers.Control.Attribution()];
var map = new OpenLayers.Map('map', {controls: controls});

For a full working version have a look at the source of the documentDrag example.

Categories: en, OpenLayers, JavaScript, geo

Poor man’s bounding box queries with CouchDB

2009-07-19 22:35

Several people store geographical points within CouchDB and would like to run bounding box queries on them. This isn’t possible with plain CouchDB _views. But there’s light at the end of the tunnel. One solution will be GeoCouch (which can do a lot more than simple bounding box queries), once there’s a new release; the other one is already there: you can use the list/show API (Warning: the current wiki page (as of 2009-07-19) applies to CouchDB 0.9, I use the new 0.10 API).

You can either add a _list function as described in the documentation or use my futon-list branch which includes an interface for easier _list function creation/editing.

Your data

The _list function needs to match your data, thus I expect documents with a field named location which contains an array with the coordinates. Here’s a simple example document:


{
   "_id": "00001aef7b72e90b991975ef2a7e1fa7",
   "_rev": "1-4063357886",
   "name": "Augsburg",
   "location": [
       10.898333,
       48.371667
   ],
   "some extra data": "Zirbelnuss"
}

The _list function

We aim at creating a _list function that returns the same response as a normal _view would, but filtered by a bounding box. Let’s start with a _list function which returns the same results as a plain _view (no bounding box filtering yet). Only the whitespace of the output differs slightly.

function(head, req) {
    var row, sep = '\n';

    // Send the same Content-Type as CouchDB would
    if (req.headers.Accept.indexOf('application/json')!=-1)
      start({"headers":{"Content-Type" : "application/json"}});
    else
      start({"headers":{"Content-Type" : "text/plain"}});

    send('{"total_rows":' + head.total_rows +
         ',"offset":'+head.offset+',"rows":[');
    while (row = getRow()) {
        send(sep + toJSON(row));
        sep = ',\n';
    }
    return "\n]}";
};

The _list API allows you to add any arbitrary query string to the URL. In our case that will be bbox=west,south,east,north (adapted from the OpenSearch Geo Extension). Parsing the bounding box is really easy. The query parameters of the request are stored in the property req.query as key/value pairs. Get the bounding box, split it into separate values and compare them with the values of every row.

var row, location, bbox = req.query.bbox.split(',');
while (row = getRow()) {
    location = row.value.location;
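    // Note: the bbox values are strings from split(), but the < and >
    // comparisons coerce them to numbers, so the filtering works as intended.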
    if (location[0]>bbox[0] && location[0]<bbox[2] &&
            location[1]>bbox[1] && location[1]<bbox[3]) {
        send(sep + toJSON(row));
        sep = ',\n';
    }
}

And finally we make sure that no error message is thrown when the bbox query parameter is omitted. Here’s the final result:

function(head, req) {
    var row, bbox, location, sep = '\n';

    // Send the same Content-Type as CouchDB would
    if (req.headers.Accept.indexOf('application/json')!=-1)
      start({"headers":{"Content-Type" : "application/json"}});
    else
      start({"headers":{"Content-Type" : "text/plain"}});

    if (req.query.bbox)
        bbox = req.query.bbox.split(',');

    send('{"total_rows":' + head.total_rows +
         ',"offset":'+head.offset+',"rows":[');
    while (row = getRow()) {
        location = row.value.location;
        if (!bbox || (location[0]>bbox[0] && location[0]<bbox[2] &&
                      location[1]>bbox[1] && location[1]<bbox[3])) {
            send(sep + toJSON(row));
            sep = ',\n';
        }
    }
    return "\n]}";
};

An example of how to access your _list function would be: http://localhost:5984/geodata/_design/designdoc/_list/bbox/viewname?bbox=10,0,120,90&limit=10000

Now you should be able to filter any of your point clouds with a bounding box. The performance should be alright for a reasonable number of points. A usual use case would be something like displaying a few points on a map, where you don’t want to see zillions of them anyway.

Stay tuned for a follow-up posting about displaying points with OpenLayers.

Categories: en, CouchDB, JavaScript, geo

By Volker Mische

Powered by Kukkaisvoima version 7