Android Cloud to Device Messaging Framework

Android Cloud to Device Messaging (C2DM) is a service that helps developers send data from servers to their applications on Android devices. The service provides a simple, lightweight mechanism that servers can use to tell mobile applications to contact the server directly to fetch updated application or user data. The C2DM service handles all aspects of message queueing and delivery to the target application running on the target device.


Here are the primary characteristics of Android Cloud to Device  Messaging (C2DM):

  • It allows third-party application servers to send lightweight messages to their Android applications. The messaging service is not designed for sending a lot of user content via the messages. Rather, it should be used to tell the application that there is new data on the server, so that the application can fetch it.
  • C2DM makes no guarantees about delivery or the order of messages. So, for example, while you might use this feature to tell an instant messaging application that the user has new messages, you probably would not use it to pass the actual messages.
  • An application on an Android device doesn’t need to be running to receive messages. The system will wake up the application via Intent broadcast when the message arrives, as long as the application is set up with the proper broadcast receiver and permissions.
  • It does not provide any built-in user interface or other handling for message data. C2DM simply passes raw message data received straight to the application, which has full control of how to handle it. For example, the application might post a notification, display a custom user interface, or silently sync data.
  • It requires devices running Android 2.2 or higher that also have the Market application installed. However, you are not limited to deploying your applications through Market.
  • It uses an existing connection for Google services. This requires users to set up their Google account on their mobile devices.

Architectural Overview

This section gives an overview of how C2DM works.

The following list summarizes the key terms and concepts involved in C2DM. It is divided into these categories:

  • Components — The physical entities that play a role in C2DM.
  • Credentials — The IDs and tokens that are used in different stages of C2DM to ensure that all parties have been authenticated, and that the message is going to the correct place.
  • Mobile Device: The device that is running an Android application that uses C2DM. This must be an Android 2.2 device that has Market installed, and it must have at least one logged-in Google account.
  • Third-Party Application Server: An application server that developers set up as part of implementing C2DM in their applications. The third-party application server sends data to an Android application on the device via the C2DM server.
  • C2DM Servers: The Google servers involved in taking messages from the third-party application server and sending them to the device.
  • Sender ID: An email account associated with the application’s developer. The sender ID is used in the registration process to identify an Android application that is permitted to send messages to the device. This ID is typically role-based rather than a personal account.
  • Application ID: The application that is registering to receive messages. The application is identified by the package name from the manifest. This ensures that the messages are targeted to the correct application.
  • Registration ID: An ID issued by the C2DM servers to the Android application that allows it to receive messages. Once the application has the registration ID, it sends it to the third-party application server, which uses it to identify each device that has registered to receive messages for a given application. In other words, a registration ID is tied to a particular application running on a particular device.
  • Google User Account: For C2DM to work, the mobile device must include at least one logged-in Google account.
  • Sender Auth Token: A ClientLogin Auth token that is saved on the third-party application server and gives the application server authorized access to Google services. The token is included in the header of POST requests that send messages. For more discussion of ClientLogin Auth tokens, see ClientLogin for Installed Applications.

Lifecycle Flow

Here are the primary processes involved in cloud-to-device messaging:

  • Enabling C2DM. An Android application running on a mobile device registers to receive messages.
  • Sending a message. A third-party application server sends messages to the device.
  • Receiving a message. An Android application receives a message from a C2DM server.

These processes are described in more detail below.

Enabling C2DM

This is the sequence of events that occurs when an Android application running on a mobile device registers to receive messages:

  1. The first time the application needs to use the messaging service, it fires off a registration Intent to a C2DM server. This registration Intent includes the sender ID (that is, the account authorized to send messages to the application, which is typically the email address of an account set up by the application’s developer) and the application ID.
  2. If the registration is successful, the C2DM server broadcasts a REGISTRATION Intent which gives the application a registration ID. The application should store this ID for later use. Note that Google may periodically refresh the registration ID, so you should design your application with the understanding that the REGISTRATION Intent may be broadcast multiple times. Your application needs to be able to respond accordingly.
  3. To complete the registration, the application sends the registration ID to the application server. The application server typically stores the registration ID in a database.

The registration ID lasts until the application explicitly unregisters itself, or until Google refreshes the registration ID for your application.

Sending a Message

For an application server to send a  message, the following things must be in place:

  • The application has a registration ID that allows it to receive messages for a particular device.
  • The third-party application server has stored the registration ID.

There is one more thing that needs to be in place for the application server to send messages: a ClientLogin authorization token. This is something that the developer must have already set up on the application server for the application (for more discussion, see Role of the Third-Party Application Server). Now it will be used to send messages to the device.

The ClientLogin token authorizes the application server to send messages to a particular Android application. An application server has one ClientLogin token for a particular 3rd party app, and multiple registration IDs. Each registration ID represents a particular device that has registered to use the messaging service for a particular 3rd party app.

Here is the sequence of events that occurs when the application server sends a  message:

  1. The application server sends a  message to  C2DM servers.
  2. Google enqueues and stores the message in case the device is inactive.
  3. When the device is online, Google sends the message to the device.
  4. On the device, the system  broadcasts the  message to the specified application via Intent broadcast with proper permissions, so that only the targeted application gets the message. This wakes the application up. The application does not need to be running beforehand to receive the message.
  5. The application processes the message. If the application is doing non-trivial processing, you may want to grab a wake lock and do any processing in a Service.

An application can unregister from C2DM if it no longer wants to receive messages.

Receiving a Message

This is the sequence of events that occurs when an Android application running on a mobile device receives a message:

  1. The system receives the incoming message and extracts the raw key/value pairs from the message payload.
  2. The system passes the key/value pairs to the targeted Android application in an Intent as a set of extras.
  3. The Android application extracts the raw data from the RECEIVE Intent by key and processes the data.

What Does the User See?

When mobile device users install an application that includes C2DM, Android Market shows them a permission notice informing them that the application includes C2DM. They must approve the use of this feature to install the application. Depending on the implementation of the application, it may offer users the option of unregistering from messages. Uninstalling the application also has the effect of unregistering.

Writing Android Applications that Use C2DM

To write Android applications that use C2DM, you must have an application server that can perform the tasks described in Role of the Third-Party Application Server. This section describes the steps you take to create a client application that uses C2DM.

Remember that there is no user interface associated with the C2DM Framework; how you choose to process messages in your application is up to you.

There are two primary steps involved in writing a client application:

  • Creating a manifest that contains the permissions the application needs to use C2DM.
  • Implementing your Java code. To use C2DM, this implementation must include:
    • Code to start and stop the registration service.
    • Receivers for the com.google.android.c2dm.intent.REGISTRATION and com.google.android.c2dm.intent.RECEIVE Intents.

Creating the Manifest

Every application must have an AndroidManifest.xml file (with precisely that name) in its root directory. The manifest presents essential information about the application to the Android system, information the system must have before it can run any of the application’s code (for more discussion of the manifest file, see the Android Developers Guide). To use the C2DM feature, the manifest must include the following:

  • com.google.android.c2dm.permission.RECEIVE states that the application has permission to register and receive messages.
  • android.permission.INTERNET states that the application has permission to send the receiver key to the 3rd party server.
  • applicationPackage + ".permission.C2D_MESSAGE prevents other applications from registering and receiving the application’s messages.
  • Receivers for com.google.android.c2dm.intent.RECEIVE and com.google.android.c2dm.intent.REGISTRATION, with the category set as applicationPackage. The receiver should require the com.google.android.c2dm.permission.SEND permission, so that only the C2DM Framework can send it the message. Note that both registration and the receiving of messages are implemented as Intents.
  • If the C2DM feature is critical to the application’s function, be sure to set android:minSdkVersion="8" in the manifest. This ensures that the application cannot be installed in an environment in which it could not run properly.

Received C2D_MESSAGE Intents  have all key/value pairs sent by the 3rd party server as extras. One special key is collapse_key, which is specified by the sender to allow handling of messages waiting for an off-line device.
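The collapse_key semantics described above can be sketched in plain Java. This is only an illustration of the queueing behavior (newer messages with the same key replace older ones), not actual C2DM server code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: messages queued for an offline device are collapsed by key,
// so only the most recent message per collapse_key survives.
public class CollapseDemo {
    static Map<String, String> pending = new LinkedHashMap<>();

    static void enqueue(String collapseKey, String payload) {
        // A newer message with the same collapse_key replaces the older one.
        pending.put(collapseKey, payload);
    }

    public static void main(String[] args) {
        enqueue("new_mail", "you have 1 new message");
        enqueue("new_mail", "you have 2 new messages");
        enqueue("friend_request", "1 pending request");
        // Only the latest "new_mail" payload is delivered when the device comes online.
        System.out.println(pending.get("new_mail")); // you have 2 new messages
        System.out.println(pending.size());          // 2
    }
}
```

This is why a sender that wants every message delivered individually should not reuse a collapse key for distinct payloads.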

Here are excerpts from a manifest that supports C2DM:

<manifest package="com.example.myapp" ...>
   <!-- Only this application can receive the messages and registration result -->     <permission android:name="com.example.myapp.permission.C2D_MESSAGE" android:protectionLevel="signature" />    <uses-permission android:name="com.example.myapp.permission.C2D_MESSAGE" />
   <!-- This app has permission to register and receive message -->    <uses-permission android:name="" />
   <!-- Send the registration id to the server -->    <uses-permission android:name="android.permission.INTERNET" />
   <application...>       <!-- Only C2DM servers can send messages for the app. If permission is not set - any other app can generate it -->        <receiver android:name=".C2DMReceiver" android:permission="">           <!-- Receive the actual message -->           <intent-filter>               <action android:name="" />               <category android:name="com.example.myapp" />           </intent-filter>           <!-- Receive the registration id -->           <intent-filter>               <action android:name="" />               <category android:name="com.example.myapp" />           </intent-filter>       </receiver>       ...    </application>    ... </manifest>

Registering for C2DM

An Android application needs to register with the C2DM servers before it can receive any messages. To register, it sends a com.google.android.c2dm.intent.REGISTER Intent with two extra parameters:

  • sender is the ID of the account authorized to send messages to the application, typically the email address of an account set up by the application’s developer.
  • app is the application’s ID, set with a PendingIntent to allow the registration service to extract application information.

For example:

Intent registrationIntent = new Intent("com.google.android.c2dm.intent.REGISTER");
registrationIntent.putExtra("app", PendingIntent.getBroadcast(this, 0, new Intent(), 0)); // boilerplate
registrationIntent.putExtra("sender", emailOfSender);
startService(registrationIntent);

Registration is not complete until the application sends the registration ID to the third-party application server. The application server  uses the registration ID to send messages that are targeted to the application running on that particular device.

Unregistering from C2DM

To unregister from C2DM:

Intent unregIntent = new Intent("com.google.android.c2dm.intent.UNREGISTER");
unregIntent.putExtra("app", PendingIntent.getBroadcast(this, 0, new Intent(), 0));
startService(unregIntent);

Handling Registration Results

As discussed in Creating the Manifest, the manifest defines a receiver for com.google.android.c2dm.intent.REGISTRATION. It also defines a receiver for com.google.android.c2dm.intent.RECEIVE. Note that both registration and the receiving of messages are implemented as Intents.

The main use of REGISTRATION is to allow the application to receive the registration ID. The Intent can be sent at any time. Google may periodically refresh the registration ID. An application receiving this Intent with a registration_id parameter must ensure that the third-party application server receives the registration ID. It may do so by saving the registration ID and sending it to the server. If the network is down or there are errors, the application should retry sending the registration ID when the network is up again or the next time it starts. An application should keep track of its registration status and attempt to register again if the process is not fully completed.

The REGISTRATION Intent is generated with an error parameter if the registration couldn’t be completed. If that happens, the application should try again later with exponential back off. When the application unregisters, the REGISTRATION Intent is sent with an unregistered extra parameter.

Here are the possible error codes for the REGISTRATION Intent:

  • SERVICE_NOT_AVAILABLE: The device can’t read the response, or there was a 500/503 from the server that can be retried later. The application should use exponential back off and retry.
  • ACCOUNT_MISSING: There is no Google account on the phone. The application should ask the user to open the account manager and add a Google account. Fix on the device side.
  • AUTHENTICATION_FAILED: Bad password. The application should ask the user to enter his/her password, and let the user retry manually later. Fix on the device side.
  • TOO_MANY_REGISTRATIONS: The user has too many applications registered. The application should tell the user to uninstall some other applications and let the user retry manually. Fix on the device side.
  • INVALID_SENDER: The sender account is not recognized.
  • PHONE_REGISTRATION_ERROR: Incorrect phone registration with Google. This phone doesn’t currently support C2DM.
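Several of these error codes call for exponential back off. A minimal sketch of that retry policy follows; the delay constants are my own illustrative choices, not values mandated by C2DM:

```java
// Sketch of exponential back off for retriable registration errors
// (e.g. SERVICE_NOT_AVAILABLE): double the wait after each failed
// attempt, up to a cap. Constants are illustrative.
public class Backoff {
    static final long INITIAL_MS = 2000;  // first retry delay
    static final long MAX_MS = 512000;    // cap so delays don't grow forever

    static long nextDelay(long currentMs) {
        return Math.min(currentMs * 2, MAX_MS);
    }

    public static void main(String[] args) {
        long delay = INITIAL_MS;
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.println("attempt " + attempt + ": wait " + delay + " ms");
            delay = nextDelay(delay);
        }
    }
}
```

On Android, each retry would typically be scheduled with AlarmManager rather than a sleeping thread, so the process does not need to stay alive between attempts.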

The application may receive a REGISTRATION Intent broadcast at any time, not only in direct response to its own registration request. And registration IDs can be nonexistent or invalid for a variety of reasons:

  • If an application is running for the first time, it has no registration ID yet.
  • If the application unregistered, it has no registration ID.
  • The C2DM server periodically refreshes registration IDs.

An application must be prepared to handle every case. For example:

public void onReceive(Context context, Intent intent) {
    if (intent.getAction().equals("com.google.android.c2dm.intent.REGISTRATION")) {
        handleRegistration(context, intent);
    } else if (intent.getAction().equals("com.google.android.c2dm.intent.RECEIVE")) {
        handleMessage(context, intent);
    }
}

private void handleRegistration(Context context, Intent intent) {
    String registration = intent.getStringExtra("registration_id");
    if (intent.getStringExtra("error") != null) {
        // Registration failed; should try again later.
    } else if (intent.getStringExtra("unregistered") != null) {
        // Unregistration done; new messages from the authorized sender will be rejected.
    } else if (registration != null) {
        // Send the registration ID to the 3rd party site that is sending the messages.
        // This should be done in a separate thread.
        // When done, remember that all registration is done.
    }
}

Handling Received Data

When the C2DM server receives a message from the third-party application server, C2DM extracts the raw key/value pairs from the message payload and passes them to the Android application in a com.google.android.c2dm.intent.RECEIVE Intent as a set of extras. The application extracts the data by key and processes it, whatever that means for that application.

Here is an example:

protected void onReceive(Context context, Intent intent) {
    String accountName = intent.getExtras().getString(Config.C2DM_ACCOUNT_EXTRA);
    String message = intent.getExtras().getString(Config.C2DM_MESSAGE_EXTRA);
    if (Config.C2DM_MESSAGE_SYNC.equals(message)) {
        if (accountName != null) {
            if (Log.isLoggable(TAG, Log.DEBUG)) {
                Log.d(TAG, "Messaging request received for account " + accountName);
            }
            ContentResolver.requestSync(
                new Account(accountName, SyncAdapter.GOOGLE_ACCOUNT_TYPE),
                JumpNoteContract.AUTHORITY, new Bundle());
        }
    }
}

Developing and Testing Your Applications

Here are some guidelines for developing and testing an Android application that uses the C2DM feature:

  • To develop and test your C2DM applications, you need to run and debug the applications on an Android 2.2 system image that includes the necessary underlying Google services.
  • To develop and debug on an actual device, you need a device running an Android 2.2 system image that includes the Market application.
  • To develop and test on the Android Emulator, you need to download the Android 2.2 version of the Google APIs Add-On into your SDK using the Android SDK and AVD Manager. Specifically, you need to download the component named “Google APIs by Google Inc, Android API 8”. Then, you need to set up an AVD that uses that system image.
  • If the C2DM feature is critical to the application’s function, be sure to set android:minSdkVersion="8" in the manifest. This ensures that the application cannot be installed in an environment in which it could not run properly.

Role of the Third-Party Application Server

Before you can write client applications that use the C2DM feature, you must have an HTTPS application server that meets the following criteria:

  • Able to communicate with your client.
  • Able to  fire off HTTP requests to the C2DM server.
  • Able to handle requests and queue data as needed. For example, it should be able to perform exponential back off.
  • Able to store the ClientLogin Auth token and client registration IDs. The ClientLogin Auth token  is included in the header of POST requests that send messages. For more discussion of this topic, see ClientLogin for Installed Applications. The server should store the token and have a policy to refresh it periodically.

How the Application Server Sends Messages

This section describes how the third-party application server sends messages to a 3rd party client application running on a mobile device.

Before the third-party application server can send a  message to an application, it must have received a registration ID from it.

To send a message, the application server issues a POST request to https://android.apis.google.com/c2dm/send that includes the following:

  • registration_id: The registration ID retrieved from the Android application on the phone. Required.
  • collapse_key: An arbitrary string that is used to collapse a group of like messages when the device is offline, so that only the last message gets sent to the client. This is intended to avoid sending too many messages to the phone when it comes back online. Note that since there is no guarantee of the order in which messages get sent, the “last” message may not actually be the last message sent by the application server. Required.
  • data.<key>: Payload data, expressed as key-value pairs. If present, it will be included in the Intent as application data, under the <key> name. There is no limit on the number of key/value pairs, though there is a limit on the total size of the message. Optional.
  • delay_while_idle: If included, indicates that the message should not be sent immediately if the device is idle. The server will wait for the device to become active, and then only the last message for each collapse_key value will be sent. Optional.
  • Authorization: GoogleLogin auth=[AUTH_TOKEN]: Header with a ClientLogin Auth token. The cookie must be associated with the ac2dm service. Required.
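The fields above travel as an ordinary form-encoded POST body. Here is a sketch of how an application server might assemble that body; the registration ID and payload values are placeholders, and the snippet only builds the body rather than performing the HTTP request:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: build the form-encoded body for a C2DM send request.
// Field names come from the table above; values are placeholders.
public class C2dmRequest {
    static String formEncode(Map<String, String> fields) throws UnsupportedEncodingException {
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (body.length() > 0) body.append('&');
            body.append(URLEncoder.encode(e.getKey(), "UTF-8"))
                .append('=')
                .append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("registration_id", "REGISTRATION_ID_FROM_DEVICE"); // placeholder
        fields.put("collapse_key", "new_mail");
        fields.put("data.payload", "check the server for updates");
        System.out.println(formEncode(fields));
        // The real request would also carry the header:
        //   Authorization: GoogleLogin auth=[AUTH_TOKEN]
    }
}
```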

Here are the possible response codes:

  • 200 — Includes a body containing one of:
    • id=[ID of sent message]
    • Error=[error code], where the error code is one of:
      • QuotaExceeded: Too many messages sent by the sender. Retry after a while.
      • DeviceQuotaExceeded: Too many messages sent by the sender to a specific device. Retry after a while.
      • InvalidRegistration: Missing or bad registration_id. Sender should stop sending messages to this device.
      • NotRegistered: The registration_id is no longer valid, for example because the user has uninstalled the application or turned off notifications. Sender should stop sending messages to this device.
      • MessageTooBig: The payload of the message is too big; see the limitations. Reduce the size of the message.
      • MissingCollapseKey: A collapse key is required. Include a collapse key in the request.
  • 503 — Indicates that the server is temporarily unavailable (e.g., because of timeouts). Sender must retry later, honoring any Retry-After header included in the response. Application servers must implement exponential back off. Senders that create problems risk being blacklisted.
  • 401 — Indicates that the ClientLogin AUTH_TOKEN used to validate the sender is invalid.
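Since a 200 response carries either an id= line or an Error= line in its body, the application server needs a small amount of parsing to tell success from failure. A hedged sketch:

```java
// Sketch: interpret the body of a 200 response from the C2DM send
// endpoint, which contains either "id=<message id>" on success or
// "Error=<error code>" on failure.
public class C2dmResponse {
    // Returns the error code, or null if the body reports success.
    static String errorOf(String body) {
        return body.startsWith("Error=") ? body.substring("Error=".length()) : null;
    }

    public static void main(String[] args) {
        System.out.println(errorOf("Error=QuotaExceeded")); // QuotaExceeded
        System.out.println(errorOf("id=0:34234234"));       // null
    }
}
```

An InvalidRegistration or NotRegistered result should cause the server to drop that registration ID from its database, per the table above.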


Here are a few complete examples to get you started:

  • JumpNote. JumpNote is a sample two-way notes application, with auto-sync. JumpNote shows how to use the Android Cloud to Device Messaging Framework, as well as the Android Sync framework. JumpNote runs on top of Google App Engine, and it uses GWT for the web user interface.
  • Google Chrome to Phone Extension. “Chrome to Phone” lets  users send content from the Chrome browser to their mobile device.

To view the source code, open the Source tab and click Browse. From there, you can navigate through the source  tree.


C2DM imposes the following limitations:

  • The message size limit is 1024 bytes.
  • Google limits the number of messages a sender sends in aggregate, and the number of messages a sender sends to a specific device.


How to Implement Push Notifications for Android


It’s been around 4 months now since I started developing apps on the Android platform. It began with me scoring a free Nexus One phone at one of the Android Developer Labs. Obviously, I couldn’t resist trying to hack around with some code, so I downloaded the SDK and dove in. I guess in some sense, that’s exactly what Google was hoping for when they started giving out free phones. While it might sound like I got lucky, in the end Google is the one who won.

Anyway, developing for the Android platform turned out to be a pleasure. The SDK was easy to set up, easy to use, and easy to understand. Putting together your first app was a breeze. I was very impressed.

Unfortunately, I soon realized that Android is not perfect. One of the things that really disappointed me was the lack of a native method for performing push notifications. Over the past year, push notifications became almost a standard in the mobile space thanks to Apple. Even though BlackBerry has utilized Push since god knows when, it was Apple that really brought Push mainstream. Obviously, the lack of native Push on Android seems like a huge drawback. Naturally, I started looking around for a solution. After Googling through dozens and dozens of blogs and message boards, I realized that there are 3 generally accepted ways to implement push notifications for your Android app. All of them are non-trivial and hacky, and each has its own disadvantages. Let’s go over the list:

  • Poll? The name obviously tells you that it’s really not even push. The idea here is to periodically poll the server for new messages from a background local or remote service. The more often you poll the closer you get to the real-time push.
Advantages: easy to implement; a no-cost solution.
Disadvantages: Obviously, you will never be truly real-time. If your polling interval is 30 minutes, you can get a message that is 29 minutes and 59 seconds late. Moreover, polling more often than every 15-30 minutes will kill your battery pretty quickly.
  • SMS Android allows you to intercept SMS messages. Your server sends a specially encoded SMS to your phone, whenever there is something new. Your app intercepts all messages, looks for the ones from the server, then pops up a notification.
Advantages: easy to implement. Fully real-time updates. Known drop-in solutions exist, such as the one provided by Ericsson Labs.
Disadvantages: Can be costly to you and the user. There are only a few services that allow you to send free SMS, and even those are often limited to North America. If you want a reliable SMS-based service that is available worldwide, you will likely need to pay. The same goes for users: not everyone has an SMS plan, and you don’t want your users getting charged by a 3rd party for using your app.
  • Persistent TCP/IP The phone initiates a long-lived, mostly idle TCP/IP connection with the server and maintains it by occasionally sending keepalive messages. Whenever there is something new on the server, it sends a message to the phone over the TCP connection.
Advantages: Fully real-time updates.
Disadvantages: Hard to implement a reliable service on both the phone and the server side. The Android OS is known to kill services when it’s running low on memory, so your notification service can easily disappear. What happens when your phone goes to sleep? Some people complain about battery life issues related to maintaining an active connection.

The first two methods have significant disadvantages that we cannot do anything about. However, the third method’s drawbacks are not as severe. It seems like with enough work and a good design, the persistent TCP/IP method can work. After all, that’s how GMail, GTalk and Google Voice implement their real-time updates. In fact, many developers out there agree that it is probably the best way to go until Google actually takes the matter in their own hands.

Persistent TCP/IP

After more Googling around I was able to come across three reasonable efforts to implement push notifications using a persistent TCP/IP connection:

While all of the work done by these guys is incredible, none of their results are quite ready for drop-in use by other developers. In my effort to implement push notifications, I decided to put the pieces of the puzzle together and combine their results to produce a relatively stable way of implementing push. The example that I provide further on is a combination of Josh Guilfoyle’s TestKeepAlive project and Dale Lane’s MQTT work. I borrow quite a bit of code from those guys, so they should get most of the credit. Anyway, enough of the introduction; let’s get to the good stuff.

My Idea

The problem with the TestKeepAlive project is that it creates a raw TCP connection, which means that you need to write your own server to take care of push on the other side. While it’s, without question, doable, that is exactly why TestKeepAlive is far from a working solution. On the other hand, the MQTT example shown by Dale Lane uses IBM’s MQTT broker to handle the server work. To back up a little, MQTT stands for MQ Telemetry Transport, which is a protocol developed by IBM. Let’s take a quick look at the man page:

MQTT is a publish/subscribe messaging protocol that is designed to be lightweight. It is useful for low-power sensors, but is applicable to many scenarios.

Did you see the part about ‘low power’? So did I. Basically, the reason why one might consider using MQTT is that it was designed to be very lightweight, so that it doesn’t consume much power. This is ideal for a mobile push solution as it addresses many battery life related concerns about persistent TCP/IP connections. Obviously, MQTT also has some disadvantages such as privacy, but we can talk about that later.

So, my idea consists of taking a KeepAliveService and replacing the raw TCP/IP connection with an MQTT connection. In this case, each device can simply subscribe to a unique topic which is based on its device ID. Now, assuming that your server knows the device ID, it can push data to the device over MQTT by publishing to that unique topic.
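The per-device topic scheme is simple enough to sketch. The topic prefix here is an arbitrary choice for the example, not something required by MQTT:

```java
// Sketch: each device subscribes to an MQTT topic derived from its
// device ID, so the server can target one device by publishing to
// that topic. The "mqtt/push/" prefix is an illustrative choice.
public class TopicScheme {
    static String topicFor(String deviceId) {
        return "mqtt/push/" + deviceId;
    }

    public static void main(String[] args) {
        System.out.println(topicFor("emulator-5554")); // mqtt/push/emulator-5554
    }
}
```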


In my example, I utilize a PHP script as a server. This script uses the Simple Asynchronous Messaging library (see the SAM project) to publish MQTT messages to the broker that I host on my server. Let’s have a look at the overall system diagram:


wmqtt.jar is a simple drop-in implementation of the MQTT protocol provided by IBM; it can be downloaded from IBM’s site. The file that you download has a bunch of different stuff in it; just look for the right jar file. You can include this jar as part of your Android app.

Really Small Message Broker (RSMB) is a simple MQTT broker, also provided by IBM. It runs on port 1883 by default. In our architecture it accepts messages from the server and passes them on to the right devices. RSMB can also be replaced by the Mosquitto server.

SAM is a drop-in PHP library for MQTT and other messaging. You can either get it as a PECL extension or download the source online.

send_mqtt.php is a simple PHP script that accepts messages over POST and uses SAM to pass messages on to the broker.

Sample Code and Demo


The goal of my work on push notifications was to develop a working demo, which is what all other examples out there lack. I’m happy to say that I accomplished my objective. You can download the sample android app on GitHub.

This app (shown on the left) has a TextView and two buttons. The TextView contains your device ID, and the buttons are used to start and stop the push notifications service. Once you have the app on your phone, start the service. Then go to the demo page, enter the device ID in the first text box, and enter the message text in the textarea below. Press “Send Push Message” and you should get a notification on your phone. It’s as easy as that.

You can see the source code for android-push in this GitHub project. It contains the aforementioned send_mqtt.php script.

If you didn’t get a notification, make sure you have network connectivity. It can also be that the broker is down on my server (see the server status on the page). If that’s the case, please post a comment and I will look into bringing the broker back up.

Final Thoughts and Comments

MQTT is definitely not the best way to implement push for Android, but it does work. One of the main drawbacks of MQTT is that anyone who knows the IP and the PORT at which the broker is running can connect and intercept your Push messages. So it’s probably a good idea to encrypt them. Alternatively, you could write your own broker and introduce some sort of authentication to MQTT.

The code I provide here for the push service still needs more testing. Reliability is definitely the main question. I think the code can definitely be improved to better handle connectivity loss and other erroneous situations. You are welcome to post your comments here regarding how it can be improved.

Also let me know if you find any bad bugs. Good luck testing!

Anton Lopyrev

Reference :


Using SQL Server 2012 T-SQL New Features


SQL Server 2012, code-named “Denali,” is the next major release of Microsoft’s database server. Some new features have been added to T-SQL to make common tasks much easier. In this article, I will show how to use some of these new features.


Generating a sequence number, a.k.a. an auto number, is a common task in an enterprise application. For a single table, you can specify an identity field. But if you want a database-wide sequential number, you had to devise something yourself before SQL Server 2012. One solution to this problem is to create a table with a numeric field that stores the sequential number, and then use SQL to increment it every time a number is used. In SQL Server 2012, we have a new solution – use a Sequence.
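
The pre-2012 counter-table workaround described above can be sketched like this (the table and column names are hypothetical):

```sql
-- A one-row table holding the current value (hypothetical names)
CREATE TABLE SequenceCounter (CurrentValue int NOT NULL);
INSERT INTO SequenceCounter VALUES (0);

-- Atomically increment the counter and read the new value
DECLARE @NextValue int;
UPDATE SequenceCounter
SET @NextValue = CurrentValue = CurrentValue + 1;
SELECT @NextValue AS NextValue;
```

The single UPDATE both increments and reads the value, avoiding a race between separate read and write statements.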

Create Sequence

Creating a Sequence in SQL Server 2012 is very simple. You can create it with SQL Server Management Studio or T-SQL.

  1. Create Sequence with SQL Server Management Studio
    In the Object Explorer window of SQL Server Management Studio, there is a Sequences node under Database -> [Database Name] -> Programmability. You can right-click on it to bring up the context menu, and then choose New Sequence… to open the New Sequence window. In the New Sequence window, you can define the new Sequence: the Sequence name, Sequence schema, Data type, Precision, Start value, Increment by, etc. After entering all the required information, click OK to save it. The new Sequence will show up under the Sequences node.
  2. Create Sequence with T-SQL
    The following T-SQL script is used to create a new Sequence:

    CREATE SEQUENCE DemoSequence
        START WITH 1
        INCREMENT BY 1;

Use Sequence

The new NEXT VALUE FOR T-SQL keyword is used to get the next sequential number from a Sequence.

SELECT NEXT VALUE FOR DemoSequence

One thing I want to mention here is that a Sequence doesn’t participate in transactions. If you run this script:

BEGIN TRANSACTION
SELECT NEXT VALUE FOR DemoSequence
ROLLBACK TRANSACTION

SELECT NEXT VALUE FOR DemoSequence

You can see that even though the transaction is rolled back at the end, NEXT VALUE FOR still returns the next sequential number. This behavior is consistent with identity fields.

Page Data

A common situation is displaying a large amount of data in a DataGrid. Programmers usually use the paging feature of the DataGrid to handle this: by choosing a different page number, a different set of data is displayed on the screen. There are, however, multiple ways to retrieve the data from the database. A developer could:

  1. Retrieve all data from the database, and then let the DataGrid display only the current page of data.
  2. Retrieve the current page of data from the database by using a temp table.
  3. Retrieve the current page of data from the database by using the ROW_NUMBER() function.
    SQL Server 2012 provides a new way to retrieve the current page of data from the database:

SELECT *
FROM Customers
ORDER BY CustomerID
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY

The OFFSET and FETCH NEXT keywords allow the developer to retrieve only a certain range of data from the database. If you compare this script with the equivalent ROW_NUMBER() approach (the ROW_NUMBER() function was introduced in SQL Server 2005), you can see this script is shorter and more intuitive.

SELECT *
FROM (
    SELECT ROW_NUMBER() OVER(ORDER BY CustomerID) AS sequencenumber, *
    FROM Customers) AS TempTable
WHERE sequencenumber > 10 and sequencenumber <= 20

Exception Handling

SQL Server 2005 introduced the TRY CATCH block to handle exceptions in T-SQL. It is similar to the try/catch block in the C# language, except that you always need to raise a new error after catching one. There is no way to simply re-throw it.

A sample of T-SQL script with exception handling in SQL Server 2005:

BEGIN TRY
	BEGIN TRANSACTION -- Start the transaction

	-- Delete the Customer
	DELETE FROM Customers
	WHERE CustomerID = 'CACTU'

	-- Commit the change
	COMMIT TRANSACTION
END TRY
BEGIN CATCH
	-- There is an error
	IF @@TRANCOUNT > 0
		ROLLBACK TRANSACTION

	-- Raise an error with the details of the exception
	DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
	SELECT @ErrMsg = ERROR_MESSAGE(),
		@ErrSeverity = ERROR_SEVERITY()

	RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH

In SQL Server 2012, by using the THROW keyword, the above script can be changed to this:

BEGIN TRY
	BEGIN TRANSACTION -- Start the transaction

	-- Delete the Customer
	DELETE FROM Customers
	WHERE CustomerID = 'CACTU'

	-- Commit the change
	COMMIT TRANSACTION
END TRY
BEGIN CATCH
	-- There is an error
	IF @@TRANCOUNT > 0
		ROLLBACK TRANSACTION;

	-- Re-throw the exception
	THROW;
END CATCH

Also, you can use THROW to replace the RAISERROR function:

THROW 51000, 'The record does not exist.', 1;

Enhanced EXECUTE keyword

The EXECUTE keyword is used to execute a command string. Previous versions of SQL Server only had the WITH RECOMPILE option to force a new plan to be compiled. SQL Server 2012 dramatically improves this part. The option clause now looks like this:

[ WITH <execute_option> [ ,...n ] ]

<execute_option> ::=
	{ RECOMPILE
	| { RESULT SETS UNDEFINED }
	| { RESULT SETS NONE }
	| { RESULT SETS ( <result_sets_definition> [,...n] ) }
	}

<result_sets_definition> ::=
	{
		( { column_name
			data_type
			[ COLLATE collation_name ]
			[ NULL | NOT NULL ] }
			[,...n ] )
	| AS OBJECT
		[ db_name . [ schema_name ] . | schema_name . ]
		{table_name | view_name | table_valued_function_name }
	| AS TYPE [ schema_name.]table_type_name
	| AS FOR XML
	}

The newly added options are used like this:

EXEC CustOrderDetail '2'
WITH RESULT SETS
(
	(
	ProductName1 varchar(100),
	Unitprice1 varchar(100),
	Quantity1 varchar(100),
	Discount1 varchar(100),
	ExtendedPrice1 varchar(100)
	)
);

Get Metadata

Applications sometimes need more insight into a SQL script’s result set. In the past, you needed to write a complicated script to query system tables or views, e.g. sys.objects, to get all the information. In SQL Server 2012, the new system stored procedure sp_describe_first_result_set makes this work trivial.

sp_describe_first_result_set @tsql = N'SELECT * FROM Customers'


There are more new T-SQL features in the upcoming SQL Server 2012. The majority of them are designed to improve development efficiency and reduce development effort.


Reference :

Top 5 SQL Server 2012 Features

SQL Server 2012 quite handily marks the most excited I’ve been about a SQL Server release since waiting over five years for SQL Server 2005. I’ve written several blog posts about individual SQL Server 2012 features, but I have yet to look back and reflect on what features I’m most excited about. Here are my top five, in no particular order:


Quite simply, SQL Server 2012 is going to revolutionize the way we think about availability and recovery. Today, when we think about these concepts, we consider an individual database (log shipping, replication, mirroring) or an entire SQL Server instance (failover clustering). In SQL Server 2012, in addition to Failover Clustering enhancements, we’ll be able to treat a group of databases as an entity – a much more common requirement with today’s complex, multi-database applications. This entity is called an Availability Group.

Availability Groups will allow us to fail over that group of databases as a single unit. Since a single instance of SQL Server will be able to host multiple availability groups, in the event of a failure, we’ll have the flexibility to fail over one availability group to instance B, another availability group to instance C, and so on. This means that we don’t have to have a standby server that is capable of handling the full load of our primary server – we can distribute those workloads across multiple, lower-powered servers, with the same application transparency we enjoy with traditional mirroring today.

We will also be able to have more than a single replica for each Availability Group, so we can have one or more local copies (protecting us from localized failures) as well as one or more remote copies (protecting us from entire site failures). Finally, read-only secondaries will allow us to run certain backup operations and all reporting activity against the mirrored copy, taking significant load off the primary server (without the maintenance and overhead of snapshots). These are called Active Secondaries in the official literature; however, I have a gripe with this term – I think it is slightly misleading, since it can imply both read and write activity.

And for the first time, we will be able to perform a cluster or mirror failover across subnets in a supported manner, meaning there will be an easy way to set up resiliency between, say, our data centers in Virginia and North Carolina. In addition, a new flexible failover policy allows us to dictate exactly what causes a failover, giving us more control over the tolerance of an instance or availability group to unfavorable conditions within certain server components.

These new AlwaysOn features allow us much more power and flexibility in our efforts toward both high availability and disaster recovery, whether we are using Availability Groups or standard Failover Clustering. Please note that AlwaysOn is not a technology in and of itself – it is merely the branding for the high availability and disaster recovery features in SQL Server 2012.

Contained Databases

A common DBA task is to migrate a database to a different instance – to a newer, more powerful server; to a different phase in the application lifecycle (dev -> test -> QA -> staging -> production); or as part of various failover conditions. A problem that has plagued us for a long time in this process is that a database is never really all that independent – there is a lot of scaffolding that has to come along for the ride to ensure that the database continues to function as a dependable component in the application. This scaffolding includes logins, SQL Server Agent jobs, linked servers, custom messages stored in sys.messages, and even differences between server and database collations (which in turn can cause problems when working with temporary objects).

In SQL Server 2012, we have a new feature called Contained Databases, which intends to reduce or eliminate the dependencies that a database has on the SQL Server instance, making it easier to migrate a database to a new instance with less of the work involved in reproducing and validating these dependencies.

One of the issues solved in this release involves the ability to define a user with a password at the database level (a “contained database user”). By authenticating at the database level, you can move the database to a new server, and applications can change their connection string without having to create new SQL Server logins – a common problem we see today is orphaned users and/or mismatched security identifiers (SIDs).
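
As a rough sketch of the contained database user described above (the database name, user name, and password here are hypothetical):

```sql
-- Allow contained database authentication on the instance
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Mark the database as partially contained
ALTER DATABASE MyAppDb SET CONTAINMENT = PARTIAL;

-- Create a user that authenticates at the database level (no server login)
USE MyAppDb;
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';
```

Because AppUser is authenticated by the database itself, no matching SQL Server login is needed on the destination instance after a move.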

This release also solves an issue involving databases with a different collation than the server’s – today, if you use any objects in tempdb, you may find that collation conflicts are a significant part of your troubleshooting efforts, especially if you move your database between instances. In SQL Server 2012, tempdb will automatically create objects using the collation of the Contained Database, rather than the server default, making it easier to rely on your code regardless of the server collation (of course this still does not resolve issues where you try to join #temp tables generated from multiple databases with different collations).

You could even say that the new THROW Transact-SQL command can help make these transitions easier, as we will be able to raise custom errors without needing to define these messages first in sys.messages. But, to be clear, this is not an explicit feature of Contained Databases.

There are DMVs and events that will help identify queries and objects that are not “contained” and that will present a potential risk should the database be moved to a new instance. The only containment option in SQL Server 2012, however, is PARTIAL – because containment is only observed, not enforced.  In future releases, we will see this model extended to help deal with SQL Server Agent jobs and linked servers, as well as actual enforcement.

While it’s not complete just yet, Contained Databases give us a fantastic first step towards database autonomy. In my previous job as a production DBA and architect, this functionality could have saved me dozens and dozens of hours in deployment preparation and subsequent troubleshooting.

ColumnStore Indexes

In SQL Server 2012, the relational engine gets to take advantage of the column-based index first seen through the VertiPaq acquisition a few years ago. What a ColumnStore index does is essentially turn a traditional index on its side. Think about a covering index that includes multiple columns, but instead of storing all the different columns in a single row on a single page, split it up and store each column on its own set of pages.
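
Creating such an index might be sketched like this (the table and column names are hypothetical):

```sql
-- A nonclustered ColumnStore index covering the columns used by reporting queries
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, Quantity, SalesAmount);
```

Each listed column is stored in its own set of pages rather than row by row.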

While this is a vast over-simplification of what is happening behind the scenes, the performance benefits can be astounding. With certain types of queries, we experience drastically reduced I/O (even though it sounds like the index will span more pages, not fewer). Since each column is stored in its own set of pages, compression becomes much more effective, since you are much more likely to have like values on the same page. Think about ordered integers and how effective page compression will be if every value on the first page is “1.” Also think about how effective an aggregate operation will be against a column that is stored on far fewer pages, without having to read the pages holding all the other columns.

You might ask, how does this really differ from a bunch of indexes on individual columns? There isn’t that much difference structurally, except that a ColumnStore index is typically defined on most of the columns in a table. There is a specific star join optimization that has been implemented which makes a query against multiple columns (including aggregates against some or all) much more efficient than the single index that would have been used in conjunction with a bunch of lookups in a traditional OLTP query. This optimization does not work for all queries, and in fact you will see that the ColumnStore index can hinder certain types of queries, such as outer joins and unions. But for certain workloads I have seen reports of performance improvements in excess of 1000%.

Now, nothing is free. In this version, ColumnStore indexes have a major limitation: they are read only. This means that once a ColumnStore index has been created on a table, you can no longer perform any DML operations against that table. So while it seems this makes the very feature far less useful in an OLTP scenario, this is not the case. With native or custom partitioning, for example, you could use a sliding window scenario, having a static set of partitions in a table with a ColumnStore index, and load current data (say, for today only) into a separate table with no ColumnStore index. Once the current day rolls into the next, you can continue loading the new day’s data into yet another table, create the ColumnStore index on yesterday’s (now unchanging) daily table, and then switch it into the partitioned table. You could make this relatively seamless to end users by creating a view that unions the partitioned table with the “current” table. You can also consider loading daily data into a non-ColumnStore table and, at the end of each day, drop the ColumnStore index from the reporting table, insert all of yesterday’s data, and rebuild the ColumnStore index. Whether this is worthwhile or fits your usage scenario depends on various factors, but the performance benefits that are realized on certain queries and workloads certainly make this something worth investigating.

For more information on ColumnStore indexes, see the ColumnStore Performance Tuning Wiki and Eric Hanson’s whitepaper entitled Columnstore Indexes for Fast Data Warehouse Query Processing in SQL Server 11.0.

T-SQL Enhancements

SQL Server 2012 brings many new features to the T-SQL language. Some were added merely to ease the transition to SQL Server from other languages and platforms. Others were added to provide very powerful, new ways to solve cumbersome problems. A few examples:

  • TRY_CONVERT()

    I’ve been bitten many times by bad data in a column using the wrong data type. ISNUMERIC(), for example, is not always reliable; values that return true are not convertible to all numeric types. Today I might try to perform something like this:

    SELECT CONVERT(INT, column) … WHERE ISNUMERIC(column) = 1;

    However, this will fail for values like ‘e’ which are considered numeric but cannot be converted to an integer. TRY_CONVERT() allows you to ignore invalid conversions and return NULL for those values instead of returning an error for the entire query.

  • OFFSET / FETCH

    Many web applications use paging to show 10 or 50 rows per page and allow the user to scroll through each page of results rather than download the entire set. MySQL has had the non-standard LIMIT clause for some time now, and SQL Server users have longed for similarly simple syntax. The standards-compliant ORDER BY … OFFSET / FETCH syntax has been added to SQL Server 2012. While it doesn’t provide significant performance improvements over the tedious CTE solutions we use today, it certainly makes the code easier to write, as I demonstrated in a blog post last November.

  • FORMAT()

    Using CLR under the covers, we will now finally have relative parity with the .format() function we are used to using in .NET languages like C#. This means no more memorizing cryptic style codes like 101 and 103 for converting datetime values to localized presentation formats, or using all kinds of messy string manipulation to present numeric values with dollar signs and thousands separators. In August I wrote a lengthy blog post about FORMAT() with many examples.

  • Window Function Enhancements

    If you’ve heard of Itzik Ben-Gan, you’re almost certainly aware of what a big fan of window functions he is. I can tell you from first-hand experience that he is absolutely ecstatic about SQL Server 2012’s addition of window offset and distribution functions, as well as enhanced windowed aggregates (including window framing). You can see an intro to these features in his recent article, and watch for future articles, as well as an upcoming book devoted entirely to the topic.
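
A quick sketch of two of the functions described above (the returned values in the comments are illustrative of the behavior):

```sql
-- TRY_CONVERT returns NULL instead of raising an error on bad input
SELECT TRY_CONVERT(int, '42');   -- 42
SELECT TRY_CONVERT(int, 'e');    -- NULL ('e' is "numeric" but not an integer)

-- FORMAT uses .NET-style format strings and culture names
SELECT FORMAT(1234567.89, 'C', 'en-US');   -- $1,234,567.89
SELECT FORMAT(GETDATE(), 'yyyy-MM-dd');    -- no cryptic style codes needed
```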

These are just a few of the T-SQL enhancements in SQL Server 2012; for a more complete list, see the Programmability Enhancements page in Books Online.

ShowPlan Enhancements

This isn’t an advertised feature, per se, but it’s something I’m excited about nonetheless. From the ShowPlan XML we can now derive much more information about our query plans, making it easier to track down various nagging performance issues.

Of course, you’ll see new operators such as Window Spools (for the new windowing functions), Offsets/Sequences and ColumnStore Index Scans (including whether execution mode was row or batch). There is also information about hint changes first introduced in SQL Server 2008 R2 SP1 (FORCESCAN, and FORCESEEK with columns).

There are also new warnings at the operator level, such as spills to tempdb (from sort operations or aggregates), and more details about implicit conversion warnings (for example, whether a conversion affects cardinality or seek choice). At the statement level, we can see information about things like whether the plan was retrieved from cache. And at the plan level, we can see many new properties, including the reason a plan was not parallel (for example, MAXDOP), detailed memory grant information (including, in some cases, how much memory was requested and how long the grant took), and deeper information about parallel queries (for example, thread usage across NUMA nodes).

To get an idea of all the information that is coming to a ShowPlan near you in SQL Server 2012, and even to get some insight into future features that have yet to be announced, perform a file comparison between the latest XSD and the 2008/2008 R2 version. You can find the SQL Server 2012 XSD at the following location:


    C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\schemas\


(For SQL Server 2008/R2, change 110 to 100.)


There are at least a dozen other features in SQL Server 2012 that I anticipate will simplify the lives of developers, database administrators and other stakeholders alike. Currently, I feel a little bit of sympathy for anyone I come across still running SQL Server 2000. Sometime next year, I’m going to feel a LOT of sympathy for anyone NOT running SQL Server 2012. If you weren’t already excited about this release, I hope my perspective on a few of its features has helped.

Reference :

Top 12 Features of SQL Server 2012

Microsoft has introduced SQL Server 2012 to the world, and it’s time for IT professionals to come up to speed on what’s new in this highly anticipated version of SQL Server.

1. AlwaysOn Availability Groups — This feature takes database mirroring to a whole new level. With AlwaysOn, users will be able to fail over multiple databases in groups instead of individually. Also, secondary copies will be readable, and can be used for database backups. The big win is that your DR environment no longer needs to sit idle.

2. Windows Server Core Support — If you don’t know what Windows Server Core is, you may want to come up to speed before Windows 8 (MS is making a push back to the command line for server products). Core is the GUI-less version of Windows that uses DOS and PowerShell for user interaction. It has a much lower footprint (50% less memory and disk space utilization), requires fewer patches, and is more secure than the full install. Starting with SQL 2012, it is supported for SQL Server.

3. Columnstore Indexes — This is a cool new feature that is completely unique to SQL Server. They are a special type of read-only index designed to be used with data warehouse queries. Basically, data is grouped and stored in a flat, compressed column index, greatly reducing I/O and memory utilization on large queries.

4. User-Defined Server Roles — DBAs have always had the ability to create custom database roles, but never server-wide ones. For example, if the DBA wanted to give a development team read/write access to every database on a shared server, traditionally the only ways to do it were either manually or using undocumented procedures, neither of which was a good solution. Now, the DBA can create a role which has read/write access on every DB on the server, or any other custom server-wide role.
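
A minimal sketch of a user-defined server role (the role name, the granted permission, and the login are all illustrative):

```sql
-- Create a custom server-level role
CREATE SERVER ROLE DevTeamRole;

-- Grant a server-level permission to the role
GRANT VIEW SERVER STATE TO DevTeamRole;

-- Add a login as a member of the role
ALTER SERVER ROLE DevTeamRole ADD MEMBER [DOMAIN\DevLogin];
```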

5. Enhanced Auditing Features — Audit is now available in all editions of SQL Server. Additionally, users can define custom audit specifications to write custom events into the audit log. New filtering features give greater flexibility in choosing which events to write to the log.

6. BI Semantic Model — This is replacing the Analysis Services Unified Dimensional Model (or cubes, as most people referred to them). It’s a hybrid model that allows one data model to support all BI experiences in SQL Server. Additionally, this will allow for some really neat infographics.

7. Sequence Objects — For those folks who have worked with Oracle, this has been a long-requested feature. A sequence is just an object that is a counter — a good example of its use would be incrementing values in a table, based on a trigger. SQL has always had similar functionality with identity columns, but now this is a discrete object.

8. Enhanced PowerShell Support — Windows and SQL Server admins should definitely start brushing up on their PowerShell scripting skills. Microsoft is driving a lot of development effort into instrumenting all of their server-based products with PowerShell. SQL 2008 gave DBAs some exposure to it, but there are many more cmdlets in SQL 2012.

9. Distributed Replay — Once again, this is an answer to a feature that Oracle released (Real Application Testing). However (and this, in my opinion, is where the real value proposition of SQL Server lies), in Oracle it is a (very expensive) cost option to Enterprise Edition. With SQL Server, when you buy your licenses for Enterprise Edition, you get everything. Distributed Replay allows you to capture a workload on a production server and replay it on another machine. This way, changes in underlying schemas, support packs, or hardware can be tested under production conditions.

10. PowerView — You may have heard of this under the name “Project Crescent.” It is a fairly powerful self-service BI toolkit that allows users to create mash-ups of BI reports from all over the enterprise.

11. SQL Azure Enhancements — These don’t really go directly with the release of SQL 2012, but Microsoft is making some key enhancements to SQL Azure. Reporting Services for Azure will be available, along with backup to the Windows Azure data store, which is a huge enhancement. The maximum size of an Azure database is now up to 150 GB. Also, Azure Data Sync allows a better hybrid model of cloud and on-premise solutions.

12. Big Data Support — I saved the biggest for last. Introduced at the PASS (Professional Association for SQL Server) conference last year, Microsoft announced a partnership with Hadoop provider Cloudera. One part of this involves MS releasing an ODBC driver for SQL Server that will run on a Linux platform. Additionally, Microsoft is building connectors for Hadoop, which is an extremely popular NoSQL platform. With this announcement, Microsoft has made a clear move into this very rapidly growing space.

SQL 2012 is a big step forward for Microsoft — the company is positioning itself to be a leader in availability and in the growing area of big data. As a database professional, I look forward to using SQL 2012 to bring new solutions to my clients.

Reference :

What is PL/SQL?

PL/SQL stands for Procedural Language extension of SQL.

PL/SQL is a combination of SQL along with the procedural features of programming languages. It was developed by Oracle Corporation in the early ’90s to enhance the capabilities of SQL.

The PL/SQL Engine:

Oracle uses a PL/SQL engine to process PL/SQL statements. PL/SQL code can be stored in the client system (client-side) or in the database (server-side).

A Simple PL/SQL Block:

Each PL/SQL program consists of SQL and PL/SQL statements which form a PL/SQL block.

A PL/SQL Block consists of three sections:

  • The Declaration section (optional).
  • The Execution section (mandatory).
  • The Exception (or Error) Handling section (optional).

Declaration Section:

The Declaration section of a PL/SQL Block starts with the reserved keyword DECLARE. This section is optional and is used to declare any placeholders, such as variables, constants, records and cursors, which are used to manipulate data in the execution section. Placeholders may be variables, constants or records, which store data temporarily. Cursors are also declared in this section.

Execution Section:

The Execution section of a PL/SQL Block starts with the reserved keyword BEGIN and ends with END. This is a mandatory section, and it is where the program logic is written to perform any task. Programmatic constructs like loops, conditional statements and SQL statements form part of the execution section.

Exception Section:

The Exception section of a PL/SQL Block starts with the reserved keyword EXCEPTION. This section is optional. Any errors in the program can be handled in this section, so that the PL/SQL Block terminates gracefully. If the PL/SQL Block contains exceptions that cannot be handled, the Block terminates abruptly with errors.

Every statement in the above three sections must end with a semicolon (;). PL/SQL blocks can be nested within other PL/SQL blocks. Comments can be used to document code.

This is how a sample PL/SQL Block looks:

DECLARE
     -- Variable declaration
BEGIN
     -- Program Execution
EXCEPTION
     -- Exception handling
END;

Webmaster Tools



Some search engines provide webmaster tools that give you detailed information and statistics about how they see and crawl your website. In order to access most of the features, you will have to verify your sites.

All of the Webmaster Tools can be accessed via your Tools –> Available Tools page.

Here’s how you do it with each of the search engines:

Google Webmaster Tools

  1. Log in to Google Webmaster Tools with your Google account.
  2. Enter your blog URL and click Add a Site.
  3. You will be presented with several verification methods. Choose the Alternate Methods tab.
  4. Select the option: Add a meta tag to your site's home page
  5. Copy the content value of the meta tag. The content value is the bold text in this example:
    <meta name='google-site-verification' content='dBw5CvburAxi537Rp9qi5uG2174Vb6JwH'>
  6. Leave the verification page open and go to your blog dashboard.
  7. Open the Tools –> Available Tools Page and paste the content value in the Google Webmaster Tools field under the Webmaster Tools Verification header.
  8. Click on Save Changes.
  9. Go back to the verification page and click Verify.


Bing Webmaster Center

  1. Log in to Bing Webmaster Center with your Live! account.
  2. Click Add a Site.
  3. Enter your blog URL and click Submit.
  4. Copy the content value of the meta tag. The content value is the bold text in this example:
    <meta name='msvalidate.01' content='12C1203B5086AECE94EB3A3D9830B2E'>
  5. Leave the verification page open and go to your blog dashboard.
  6. Open the Tools –> Available Tools Page and paste the content value in the Bing Webmaster Center field under the Webmaster Tools Verification header.
  7. Click on Save Changes.
  8. Go back to the verification page and click Return to the Site list.


Reference :