JSON round trip with Node.js

One of the first things you need to do, if you’re serious about writing a RIA with a JavaScript backend, is to be able to quickly send messages to and from the server. JSON is obviously the best format for JavaScript-to-JavaScript communication. So, I set up a simple example of a node.js server that can both send and receive JSON objects via AJAX, and cache them in memory on the server. The full code of the example is out on GitHub.

I’m going to pluck out the juicy bits right here, though, and explain them.

Client To Server

The first thing you need to do is be able to POST a JSON object. This is easy enough with jQuery:

function put(id, data, callback) {
  $.ajax('http://127.0.0.1:8181/' + id + '/', {
    type: 'POST',
    data: JSON.stringify(data),
    contentType: 'text/json',
    success: function() { if ( callback ) callback(true); },
    error: function() { if ( callback ) callback(false); }
  });
}

Note that the body of the POST is not URL encoded like that of a POSTed form: that’s verbose and wasteful, and gains us nothing, since we’d have to decode it on the server anyway. Note also that I’m using JSON.stringify. This is in the ECMA-262 standard, built into modern browsers, and Douglas Crockford has written json2.js, a JSON compatibility library for legacy browsers.
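If you need to support those legacy browsers, a conditional load of the shim might look like this; this is my own sketch, and the json2.js path is an assumption:

if (typeof JSON === 'undefined') {
  // Pre-ES5 browser: load Crockford's json2.js, which defines
  // JSON.stringify and JSON.parse with the standard API.
  document.write('<script src="/js/json2.js"><\/script>');
}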

The next step is to receive that message on the server, inside an HTTP request handler:

http.createServer(function(request, response) {
  ...
  if ( request.method === 'POST' ) {
    // the body of the POST is the JSON payload.
    var data = '';
    request.addListener('data', function(chunk) { data += chunk; });
    request.addListener('end', function() {
      store[id] = JSON.parse(data);
      response.writeHead(200, {'content-type': 'text/plain'});
      response.end();
    });
  }
  ...
});

The request emits multiple “data” events, each carrying a piece of the JSON string, so we have to accumulate them all into one string. When all the data has been received, the “end” event is emitted, and we can parse the now-complete JSON string. In this case our handling consists only of tucking the deserialized object away in the store. Afterwards, we return an empty document with a “200 OK” status.

I should probably add error handling around the JSON.parse, since it will throw an exception on malformed input, but I forgot. Typical error handling looks like this:

try {
  store[id] = JSON.parse(data);
} catch ( e ) {
  response.writeHead(500, {'content-type': 'text/plain'});
  response.write('ERROR: ' + e);
  response.end('\n');
}

Server To Client

This is very simple. On the server, we just have to get the object out of the store, serialize it, and write it out.

if ( request.method === 'GET' ) {
  // exact id lookup.
  if ( id in store ) {
    response.writeHead(200, {'content-type': 'text/json'});
    response.write(JSON.stringify(store[id]));
    response.end('\n');
  } else {
    response.writeHead(404, {'content-type': 'text/plain'});
    response.write('no data for ' + id);
    response.end('\n');
  }
}

Note that I’m using the MIME type text/json. The official MIME type is application/json, but I’ve had trouble with frameworks treating that as unencoded binary data. You should probably use the standard, though, unless you have a good reason not to.
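For what it’s worth, the standards-compliant variant is a one-line change on the server, and jQuery’s dataType: 'json' will parse the body either way:

// Standards-compliant variant of the GET response.
response.writeHead(200, {'content-type': 'application/json'});
response.end(JSON.stringify(store[id]) + '\n');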

jQuery supports JSON data right out of the box, so there’s barely anything for us to do on the client:

function get(id, callback) {
  $.ajax('http://127.0.0.1:8181/' + id + '/', {
    type: 'GET',
    dataType: 'json',
    success: function(data) { if ( callback ) callback(data); },
    error: function() { if ( callback ) callback(null); }
  });
}
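Putting the two halves together, a round trip from the browser might look like this (my own usage sketch, with a made-up id and payload, assuming the server above is running):

// Store an object on the server, then read it back.
put('42', { name: 'Arthur', answer: 42 }, function(ok) {
  if ( ok ) {
    get('42', function(data) {
      console.log(data.answer); // 42, same type, no conversion step
    });
  }
});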

Conclusion

It’s easy to send JSON from the client to the server, and even easier to get it from the server to the client. There are no mismatched data types, no parsing or serialization algorithms to write, just two environments that speak the same language, communicating in a minimal (but not trivial) subset of that language. Can you see why I’m so excited about this stuff?

Reference: http://oranlooney.com/json-round-trip/


How to Use the Table Storage Service from Node.js

This guide shows you how to perform common scenarios using the Windows Azure Table storage service. The samples are written using the Node.js API. The scenarios covered include creating and deleting a table, and inserting and querying entities in a table. For more information on tables, see the Next Steps section.

Table of Contents

What is the Table Service?
Concepts
Create a Windows Azure Storage Account
Create a Node.js Application
Configure your Application to Access Storage
Setup a Windows Azure Storage Connection
How To: Create a Table
How To: Add an Entity to a Table
How To: Update an Entity
How to: Change a Group of Entities
How to: Query for an Entity
How to: Query a Set of Entities
How To: Query a Subset of Entity Properties
How To: Delete an Entity
How To: Delete a Table
Next Steps

What is the Table Service?

The Windows Azure Table storage service stores large amounts of structured data. The service accepts authenticated calls from inside and outside the Windows Azure cloud. Windows Azure tables are ideal for storing structured, non-relational data. Common uses of the Table service include:

  • Storing a huge amount of structured data (many TB) that is automatically scaled to meet throughput demands
  • Storing datasets that don’t require complex joins, foreign keys, or stored procedures and can be denormalized for fast access
  • Quickly querying data such as user profiles using a clustered index

You can use the Table service to store and query huge sets of structured, non-relational data, and your tables scale when volume increases.

Concepts

The Table service contains the following components:


  • URL format: Code addresses tables in an account using this address format:
    http://<storage account>.table.core.windows.net/<table>

    You can address Azure tables directly at this address using the OData protocol. For more information, see OData.org.

  • Storage Account: All access to Windows Azure Storage is done through a storage account. The total size of blob, table, and queue contents in a storage account cannot exceed 100TB.
  • Table: A table is an unlimited collection of entities. Tables don’t enforce a schema on entities, which means a single table can contain entities that have different sets of properties. An account can contain many tables.
  • Entity: An entity is a set of properties, similar to a database row. An entity can be up to 1MB in size.
  • Properties: A property is a name-value pair. Each entity can include up to 252 properties to store data. Each entity also has three system properties that specify a partition key, a row key, and a timestamp. Entities with the same partition key can be queried more quickly, and inserted/updated in atomic operations. An entity’s row key is its unique identifier within a partition.
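As a preview of the how-to sections that follow, a minimal sketch with the azure npm module of this era might look like the following; the table name and entity values are made up, and I’m assuming credentials are supplied via the AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY environment variables:

var azure = require('azure');

// Picks up account name and key from the environment variables above.
var tableService = azure.createTableService();

tableService.createTableIfNotExists('tasks', function(error) {
  if (error) throw error;
  // Every entity carries the two system key properties.
  var task = {
    PartitionKey: 'hometasks',
    RowKey: '1',
    description: 'take out the trash'
  };
  tableService.insertEntity('tasks', task, function(error) {
    if (error) throw error;
    // A point query: partition key + row key identify exactly one entity.
    tableService.queryEntity('tasks', 'hometasks', '1', function(error, entity) {
      if (!error) console.log(entity.description);
    });
  });
});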


Node.js + MongoDB = Love: Guest Post from MongoLab


Node.js and the popular document-oriented MongoDB make for a deeply powerful and robust application platform. Or in other words, they rock.

(Note: This blog post was contributed by Ben Wen of MongoLab – a Joyent Partner and provider of MongoDB hosting, support and analytics)

Pair Joyent Cloud’s hosted node.js SmartMachine Appliance with MongoLab’s hosted MongoDB and the integration becomes downright operatic. Angels sing. Trumpets blare. Grey storm thunderheads of object-relational-mapping haze part. Revealed are golden rays of low-impedance JSON object storage and query. All in the fertile green valley of asynchronous JavaScript on the unflappable, cool bedrock of Joyent’s SmartMachine hosting platform. Songbirds tweet. Life is good. Metaphors strain.

More prosaically, the high-performance asynchronous design of node.js and the tunable latency/consistency of MongoDB mean a high-throughput application can be assembled in a compressed timeframe, with standard tools you probably have lying around the home. Since MongoLab runs managed hosted MongoDB instances on Joyent’s Cloud near a node.js SmartMachine, you get world-class operation of both environments.

Below, we’ll take a quick spin setting up a MongoLab database and a no.de account. We’ll build a minimalistic web server that can do some data inserts and queries, and display the results through a gratuitous 3D guestbook demo.

For the impatient

  1. Sign up at mongolab.com and create a MongoLab database on Joyent Cloud, and note the database name, hostname, port, and database username/password
  2. Sign up at no.de and start a SmartMachine
  3. git clone git://github.com/mongolab/demo-node-01.git
  4. Modify config.js with database credentials and connection info from Step 1.
  5. git commit -a -m "updated config"
    git remote add mongolabdemo <your no.de machine>.no.de
    git push mongolabdemo master
  6. Point your WebGL capable browser to <your no.de machine>.no.de and enjoy.

For the really impatient

  1. Go to nodejs.mongolab.com with your WebGL compatible browser

What is MongoDB?

First, a quick word about MongoDB for the newly initiated. MongoDB is a non-relational database system that emphasizes horizontal scale across multiple servers, tunable consistency, and high performance. Being a document database, it uses JSON notation to describe data, and sports a rich query language with indexes to enhance query speed. It also has a map-reduce framework for more intense data analysis and transformation. There is growing adoption of MongoDB for large stores of documents, as in a Content Management System or in data analytics, for feature-rich Web 2.0 sites and games, and for persistent stores for mobile applications. Its code is open source, licensed under the GNU AGPL v3.0, and is commercially licensed from its author, 10gen. Large corporations and smaller outfits are using MongoDB in production today. New users, you are in good company.
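To make “JSON notation” and the rich query language concrete, here is a minimal sketch against a local database, using the MongoClient API from the 1.x-era Node.js mongodb driver; the connection string, collection name, and documents are placeholders of mine:

var MongoClient = require('mongodb').MongoClient;

// Connection string format: mongodb://user:password@host:port/database
MongoClient.connect('mongodb://localhost:27017/guestbook', function(err, db) {
  if (err) throw err;
  var entries = db.collection('entries');
  // Documents are just JSON-ish objects; there is no schema to declare.
  entries.insert({ name: 'Ben', message: 'Hello from MongoLab!' }, function(err) {
    if (err) throw err;
    // Queries are expressed as JSON-like documents, too.
    entries.find({ name: 'Ben' }).toArray(function(err, docs) {
      if (!err) console.log(docs);
      db.close();
    });
  });
});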
