OCR and Android

On Device or In The Cloud?

Before deciding on an OCR library, one needs to decide where the OCR process should take place: on the smartphone or in the cloud. Each approach has its advantages.
On-device OCR can be performed without requiring an Internet connection: instead of sending a photo, which can potentially be huge (many phones now have 8 or 12 mega-pixel cameras), the text is recognized by an on-board OCR-engine.
However, OCR-libraries tend to be large, i.e. the mobile application will be of considerable size. Depending on the amount of text that needs to be recognized and the available data transfer speed, a cloud-service may deliver the result faster. A cloud-service can also be updated more easily, but individually optimizing (training) an OCR engine may work better when done locally on the device.

Which OCR Library to choose?

After taking a closer look at all the comparisons, Tesseract stands out. It provides good accuracy, it’s open-source and Apache-licensed, and it has broad language support. It was created by HP and is now developed by Google.

Also, since Tesseract is open-source and Apache-licensed, we can take the source and port it to the Android platform, or put it on a Web-server to run our very own Cloud-service.

(A tesseract, by the way, is a four-dimensional object, much like a cube is a three-dimensional one: a square has two dimensions, six squares fold into a cube with three dimensions, and a tesseract is made the same way, but in four dimensions.)

1. Tesseract

The Tesseract OCR engine was developed at Hewlett Packard Labs and is currently sponsored by Google. It was among the top three OCR engines in terms of character accuracy in 1995. http://code.google.com/p/tesseract-ocr/

1.1. Running Tesseract locally on a Mac

Like with so many other Unix and Linux tools, Homebrew (http://mxcl.github.com/homebrew/) is the easiest and most flexible way to install the UNIX tools Apple didn’t include with OS X. Once Homebrew is installed (https://github.com/mxcl/homebrew/wiki/installation), Tesseract can be installed on OS X as easily as:
$ brew install tesseract
Once installed, running
$ brew info tesseract
will return something like this:

tesseract 3.00

http://code.google.com/p/tesseract-ocr/

Depends on: libtiff
/usr/local/Cellar/tesseract/3.00 (316 files, 11M)
Tesseract is an OCR (Optical Character Recognition) engine.
The easiest way to use it is to convert the source to a Grayscale tiff:
`convert source.png -type Grayscale terre_input.tif`
then run tesseract:
`tesseract terre_input.tif output`

http://github.com/mxcl/homebrew/commits/master/Library/Formula/tesseract.rb


Tesseract doesn’t come with a GUI and instead runs from a command-line interface. To OCR a TIFF-encoded image located on your desktop, you would do something like this:
$ tesseract ~/Desktop/cox.tiff ~/Desktop/cox
Using the image below, Tesseract wrote the resulting text, with perfect accuracy, into
~/Desktop/cox.txt

There are at least two projects providing a GUI front-end for Tesseract on OS X:

  1. TesseractGUI, a native OSX client: http://download.dv8.ro/files/TesseractGUI/
  2. VietOCR, a Java Client: http://vietocr.sourceforge.net/

1.2. Running Tesseract as a Cloud-Service on a Linux Server

One of the fastest and easiest ways to deploy Tesseract as a Web-service uses Tornado (http://www.tornadoweb.org/), an open source (Apache Licensed) Python non-blocking web server. Since Tesseract accepts TIFF-encoded images but our Cloud-Service should rather work with the more popular JPEG image format, we also need to deploy the free Python Imaging Library (http://www.pythonware.com/products/pil/); the license terms are here: http://www.pythonware.com/products/pil/license.htm

The deployment on an Ubuntu 11.10 64-bit server looks something like this:

sudo apt-get install python-tornado
sudo apt-get install python-imaging
sudo apt-get install tesseract-ocr

1.2.1. The HTTP Server-Script for port 8080

#!/usr/bin/env python
import tornado.httpserver
import tornado.ioloop
import tornado.web
import pprint
import Image
from tesseract import image_to_string
import StringIO
import os.path
import uuid
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        # serve a minimal HTML upload form
        self.write('<html><body>'
                   '<form action="/" method="post" enctype="multipart/form-data">'
                   '<input type="file" name="the_file" />'
                   '<input type="submit" value="Submit" />'
                   '</form></body></html>')

    def post(self):
        self.set_header("Content-Type", "text/html")
        self.write("<html><body>")
        # create a unique file name
        tempname = str(uuid.uuid4()) + ".jpg"
        myimg = Image.open(StringIO.StringIO(self.request.files.items()[0][1][0]['body']))
        myfilename = os.path.join(os.path.dirname(__file__), "static", tempname)
        # save image to file as JPEG
        myimg.save(myfilename)
        # do OCR, print result
        self.write(image_to_string(myimg))
        self.write("</body></html>")
settings = {
    "static_path": os.path.join(os.path.dirname(__file__), "static"),
}
application = tornado.web.Application([
    (r"/", MainHandler),
], **settings)
if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8080)
    tornado.ioloop.IOLoop.instance().start()

The server receives a JPEG image file and stores it locally in the ./static directory before calling image_to_string, which is defined in the Python script below:

1.2.2. image_to_string function implementation

#!/usr/bin/env python
tesseract_cmd = 'tesseract'
import Image
import StringIO
import subprocess
import sys
import os
__all__ = ['image_to_string']
def run_tesseract(input_filename, output_filename_base, lang=None, boxes=False):
    '''
    runs the command:
        `tesseract_cmd` `input_filename` `output_filename_base`
    returns the exit status of tesseract, as well as tesseract's stderr output
    '''
    command = [tesseract_cmd, input_filename, output_filename_base]
    if lang is not None:
        command += ['-l', lang]
    if boxes:
        command += ['batch.nochop', 'makebox']
    proc = subprocess.Popen(command,
            stderr=subprocess.PIPE)
    return (proc.wait(), proc.stderr.read())
def cleanup(filename):
    ''' tries to remove the given filename. Ignores non-existent files '''
    try:
        os.remove(filename)
    except OSError:
        pass
def get_errors(error_string):
    '''
    returns all lines in the error_string that contain the string "Error"
    '''
    lines = error_string.splitlines()
    error_lines = tuple(line for line in lines if line.find('Error') >= 0)
    if len(error_lines) > 0:
        return '\n'.join(error_lines)
    else:
        return error_string.strip()
def tempnam():
    ''' returns a temporary file-name '''
    # prevent os.tempnam from printing an error...
    stderr = sys.stderr
    try:
        sys.stderr = StringIO.StringIO()
        return os.tempnam(None, 'tess_')
    finally:
        sys.stderr = stderr
class TesseractError(Exception):
    def __init__(self, status, message):
        self.status = status
        self.message = message
        self.args = (status, message)
def image_to_string(image, lang=None, boxes=False):
    '''
    Runs tesseract on the specified image. First, the image is written to disk,
    and then the tesseract command is run on the image. Tesseract's result is
    read, and the temporary files are erased.
    '''
    input_file_name = '%s.bmp' % tempnam()
    output_file_name_base = tempnam()
    if not boxes:
        output_file_name = '%s.txt' % output_file_name_base
    else:
        output_file_name = '%s.box' % output_file_name_base
    try:
        image.save(input_file_name)
        status, error_string = run_tesseract(input_file_name,
                                             output_file_name_base,
                                             lang=lang,
                                             boxes=boxes)
        if status:
            errors = get_errors(error_string)
            raise TesseractError(status, errors)
        f = file(output_file_name)
        try:
            return f.read().strip()
        finally:
            f.close()
    finally:
        cleanup(input_file_name)
        cleanup(output_file_name)
if __name__ == '__main__':
    if len(sys.argv) == 2:
        filename = sys.argv[1]
        try:
            image = Image.open(filename)
        except IOError:
            sys.stderr.write('ERROR: Could not open file "%s"\n' % filename)
            exit(1)
        print image_to_string(image)
    elif len(sys.argv) == 4 and sys.argv[1] == '-l':
        lang = sys.argv[2]
        filename = sys.argv[3]
        try:
            image = Image.open(filename)
        except IOError:
            sys.stderr.write('ERROR: Could not open file "%s"\n' % filename)
            exit(1)
        print image_to_string(image, lang=lang)
    else:
        sys.stderr.write('Usage: python tesseract.py [-l language] input_file\n')
        exit(2)

1.2.3. The Service deploy/start Script

description  "OCR WebService"
start on runlevel [2345]
stop on runlevel [!2345]
pre-start script
mkdir /tmp/ocr
mkdir /tmp/ocr/static
cp /usr/share/ocr/*.py /tmp/ocr
end script
exec /tmp/ocr/tesserver.py

After the service has been started, it can be accessed through a Web browser, as shown here: http://proton.techcasita.com:8080 (I’m currently running Tesseract 3.01 on Ubuntu Linux 11.10 64-bit; please be gentle, it runs on an Intel Atom CPU 330 @ 1.60GHz with 4 cores, typically found in netbooks). The HTML-encoded result looks something like this:

<html><body>Contact Us
www. cox.com
Customer Serv 760-788-9000
Repair 76O—788~71O0
Cox Telephone 888-222-7743</body></html>

1.3 Accessing the Tesseract Cloud-Service from Android

The OCRTaskActivity below utilizes Android’s built-in AsyncTask as well as the Apache Software Foundation’s HttpComponents library HttpClient 4.1.2, available here: http://hc.apache.org/httpcomponents-client-ga/index.html OCRTaskActivity expects the image to be passed in as the Intent extra “ByteArray” of type ByteArray. The OCR result is returned to the calling Activity as OCR_TEXT, as shown here:

setResult(Activity.RESULT_OK, getIntent().putExtra("OCR_TEXT", result));
import android.app.Activity;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.ProgressBar;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.HttpMultipartMode;
import org.apache.http.entity.mime.MultipartEntity;
import org.apache.http.entity.mime.content.ByteArrayBody;
import org.apache.http.entity.mime.content.StringBody;
import org.apache.http.impl.client.DefaultHttpClient;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class OCRTaskActivity extends Activity {
    private static String LOG_TAG = OCRTaskActivity.class.getSimpleName();
    private static String[] URL_STRINGS = {"http://proton.techcasita.com:8080"};
    private byte[] mBA;
    private ProgressBar mProgressBar;
    @Override
    public void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.ocr);
        mBA = getIntent().getExtras().getByteArray("ByteArray");
        ImageView iv = (ImageView) findViewById(R.id.ImageView);
        iv.setImageBitmap(BitmapFactory.decodeByteArray(mBA, 0, mBA.length));
        mProgressBar = (ProgressBar) findViewById(R.id.progressBar);
        OCRTask task = new OCRTask();
        task.execute(URL_STRINGS);
    }
    private class OCRTask extends AsyncTask<String, Void, String> {
        @Override
        protected String doInBackground(final String... urls) {
            String response = "";
            for (String url : urls) {
                try {
                    response = executeMultipartPost(url, mBA);
                    Log.v(LOG_TAG, "Response:" + response);
                    break;
                } catch (Throwable ex) {
                    Log.e(LOG_TAG, "error: " + ex.getMessage());
                }
            }
            return response;
        }
        @Override
        protected void onPostExecute(final String result) {
            mProgressBar.setVisibility(View.GONE);
            setResult(Activity.RESULT_OK, getIntent().putExtra("OCR_TEXT", result));
            finish();
        }
    }
    private String executeMultipartPost(final String stringUrl, final byte[] bm) throws Exception {
        HttpClient httpClient = new DefaultHttpClient();
        HttpPost postRequest = new HttpPost(stringUrl);
        ByteArrayBody bab = new ByteArrayBody(bm, "the_image.jpg");
        MultipartEntity reqEntity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        reqEntity.addPart("uploaded", bab);
        reqEntity.addPart("name", new StringBody("the_file"));
        postRequest.setEntity(reqEntity);
        HttpResponse response = httpClient.execute(postRequest);
        BufferedReader reader = new BufferedReader(new InputStreamReader(response.getEntity().getContent(), "UTF-8"));
        String sResponse;
        StringBuilder s = new StringBuilder();
        while ((sResponse = reader.readLine()) != null) {
            s = s.append(sResponse).append('\n');
        }
        int i = s.indexOf("body");
        int j = s.lastIndexOf("body");
        return s.substring(i + 5, j - 2);
    }
}
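
For completeness, here is a minimal sketch of the calling side, assuming the JPEG bytes are already at hand; the request code and method name are illustrative and not part of the original project:

private static final int REQUEST_OCR = 42; // arbitrary request code

// Launch OCRTaskActivity with the image bytes and wait for the result.
private void startOcr(byte[] jpegBytes) {
    Intent intent = new Intent(this, OCRTaskActivity.class);
    intent.putExtra("ByteArray", jpegBytes);
    startActivityForResult(intent, REQUEST_OCR);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_OCR && resultCode == Activity.RESULT_OK) {
        String ocrText = data.getStringExtra("OCR_TEXT");
        // use the recognized text, e.g. display it in a TextView
    }
}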

1.4. Building a Tesseract native Android Library to be bundled with an Android App

This approach allows an Android application to perform OCR even without a network connection, i.e. the OCR engine is on-board. There are currently two source bases to start from:

  1. Tesseract Tools for Android is a set of Android APIs and build files for the Tesseract OCR and Leptonica image processing libraries:
    svn checkout http://tesseract-android-tools.googlecode.com/svn/trunk/ tesseract-android-tools
  2. A fork of Tesseract Tools for Android (tesseract-android-tools) that adds some additional functions:
    git clone git://github.com/rmtheis/tess-two.git

… I went with option 2.

1.4.1. Building the native lib

Each project can be built with the same build steps (see below), but neither works with Android’s NDK r7. However, going back to NDK r6b solved that problem. Here are the build steps; the build takes a little while, even on a fast machine.

cd <project-directory>/tess-two
export TESSERACT_PATH=${PWD}/external/tesseract-3.01
export LEPTONICA_PATH=${PWD}/external/leptonica-1.68
export LIBJPEG_PATH=${PWD}/external/libjpeg
ndk-build
android update project --path .
ant release

The build steps create the native libraries in the libs/armeabi and libs/armeabi-v7a directories.

The tess-two project can now be included as a library project in an Android project; with the JNI layer in place, calling into the native OCR library looks something like this:

1.4.2. Developing a simple Android App with built-in OCR capabilities

...
TessBaseAPI baseApi = new TessBaseAPI();
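// DATA_PATH is assumed to point to the directory containing the tessdata folder,
// e.g. a directory on the SDCard; LANG names the training data, e.g. "eng"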
baseApi.init(DATA_PATH, LANG);
baseApi.setImage(bitmap);
String recognizedText = baseApi.getUTF8Text();
baseApi.end();
...

1.4.2.1. Libraries / TrainedData / App Size

The native libraries are about 3 MBytes in size. Additionally, a language- and font-dependent training resource file is needed.
The eng.traineddata file (e.g. available with the desktop version of Tesseract) is placed into the Android app’s assets/tessdata folder and deployed with the application, adding another 2 MBytes to the app. However, due to compression, the actual downloadable Android application is “only” about 4.1 MBytes.

During the first start of the application, the eng.traineddata resource file is copied to the phone’s SDCard.
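
A minimal sketch of that first-run copy step, assuming the training file was bundled under assets/tessdata and DATA_PATH points to the target directory on the SDCard:

// Copies eng.traineddata from the app's assets to DATA_PATH/tessdata/
// on first start; skips the copy if the file is already present.
private void copyTrainedData() throws IOException {
    File tessdata = new File(DATA_PATH, "tessdata");
    File trained = new File(tessdata, "eng.traineddata");
    if (trained.exists()) {
        return;
    }
    tessdata.mkdirs();
    InputStream in = getAssets().open("tessdata/eng.traineddata");
    OutputStream out = new FileOutputStream(trained);
    byte[] buf = new byte[8192];
    int len;
    while ((len = in.read(buf)) > 0) {
        out.write(buf, 0, len);
    }
    out.close();
    in.close();
}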

The ocr() method for the sample app may look something like this:

protected void ocr() {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = 2;
        Bitmap bitmap = BitmapFactory.decodeFile(IMAGE_PATH, options);
        try {
            ExifInterface exif = new ExifInterface(IMAGE_PATH);
            int exifOrientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
            Log.v(LOG_TAG, "Orient: " + exifOrientation);
            int rotate = 0;
            switch (exifOrientation) {
                case ExifInterface.ORIENTATION_ROTATE_90:
                    rotate = 90;
                    break;
                case ExifInterface.ORIENTATION_ROTATE_180:
                    rotate = 180;
                    break;
                case ExifInterface.ORIENTATION_ROTATE_270:
                    rotate = 270;
                    break;
            }
            Log.v(LOG_TAG, "Rotation: " + rotate);
            if (rotate != 0) {
                // Getting width & height of the given image.
                int w = bitmap.getWidth();
                int h = bitmap.getHeight();
                // Setting pre rotate
                Matrix mtx = new Matrix();
                mtx.preRotate(rotate);
                // Rotating Bitmap
                bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
                // tesseract req. ARGB_8888
                bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
            }
        } catch (IOException e) {
            Log.e(LOG_TAG, "Rotate or coversion failed: " + e.toString());
        }
        ImageView iv = (ImageView) findViewById(R.id.image);
        iv.setImageBitmap(bitmap);
        iv.setVisibility(View.VISIBLE);
        Log.v(LOG_TAG, "Before baseApi");
        TessBaseAPI baseApi = new TessBaseAPI();
        baseApi.setDebug(true);
        baseApi.init(DATA_PATH, LANG);
        baseApi.setImage(bitmap);
        String recognizedText = baseApi.getUTF8Text();
        baseApi.end();
        Log.v(LOG_TAG, "OCR Result: " + recognizedText);
        // clean up and show
        if (LANG.equalsIgnoreCase("eng")) {
            recognizedText = recognizedText.replaceAll("[^a-zA-Z0-9]+", " ");
        }
        if (recognizedText.length() != 0) {
            ((TextView) findViewById(R.id.field)).setText(recognizedText.trim());
        }
    }

OCR on Android

The popularity of smartphones, combined with their built-in high-quality cameras, has created a new category of mobile applications that benefit greatly from OCR.

OCR is a very mature technology with a broad range of available libraries to choose from. There are Apache- and BSD-licensed, fast and accurate solutions available from the open-source community; I have taken a closer look at Tesseract, which was created at HP and is now developed by Google.

Tesseract can be used to build a desktop application or a Cloud-Service, and it can even be baked into a mobile Android application to perform on-board OCR. All three variations of OCR with the Tesseract library have been demonstrated above.

Focusing on mobile applications, however, it became very clear that even on phones with a 5MP camera, the accuracy of the results still varies greatly, depending on lighting conditions, font and font-size, as well as surrounding artifacts.

Just like with the TeleForm application, even the best OCR engines perform poorly if the input image has not been prepared correctly. To make OCR work on a mobile device, no matter whether the OCR will eventually run on-board or in the cloud, much development time needs to be spent training the engine and, even more importantly, selecting and preparing the image areas that will be provided as input to the OCR engine: it’s going to be all about the pre-processing.
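
To illustrate just one simple pre-processing step, the sketch below converts a captured bitmap to grayscale with Android’s ColorMatrix before it is handed to the OCR engine; deskewing, cropping, and thresholding are further steps worth considering:

// Converts a bitmap to grayscale, a common first OCR pre-processing step.
private static Bitmap toGrayscale(Bitmap src) {
    Bitmap gray = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
            Bitmap.Config.ARGB_8888); // Tesseract requires ARGB_8888
    Canvas canvas = new Canvas(gray);
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0); // zero saturation drops all color information
    Paint paint = new Paint();
    paint.setColorFilter(new ColorMatrixColorFilter(cm));
    canvas.drawBitmap(src, 0, 0, paint);
    return gray;
}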

 

Reference: http://wolfpaulus.com/jounal/android-journal/android-and-ocr/


Add Google Cloud Platform as Backend with Android Studio

Android Studio lets you easily add a cloud backend to your application, right from your IDE. A backend allows you to implement functionality such as backing up user data to the cloud, serving content to client apps, real-time interactions, sending push notifications through Google Cloud Messaging for Android (GCM), and more. Additionally, having your application’s backend hosted on Google App Engine means that you can focus on what the cloud application does, without having to worry about administration, reliability or scalability.

When you create a backend using Android Studio, it generates a new App Engine application under the same project, and gives your Android application the necessary libraries and a sample activity to interact with that backend. Support for GCM is built-in, making it easy to sync data across multiple devices. Once you’ve generated the project, you can build and run your client and server code together, in a single environment, and even deploy your backend code right from Android Studio.

In this post we’ll focus on how to get started with the basic setup. From there it’s easy to extend the basic setup to meet your needs.

Preliminary setup
Before you get started, make sure you take care of these tasks first:

Download Android Studio if you haven’t done so already and set it up.
Make sure you have an application project set up in Android Studio. You can use any working app that you want to integrate with your backend, even a sample app.
If you’ll be running the app on an emulator, download the Google APIs Addon from the SDK Manager and run your app on that image.
Create a Google Cloud Platform project: In the Cloud Console, create a new project (or reuse an old one) and make note of the Project ID. Click on the words “Project ID” on the top left to toggle to the Project Number. Copy this as well.
Enable GCM and obtain API Key: In the Cloud Console, click on APIs and turn on the Google Cloud Messaging for Android API. Then, click on the “Register App” button on the top left, enter a name for the app, then select “Android” and “Accessing APIs via a web server”. In the resulting screen, expand the “Server Key” box and copy the API key.
1. Generate an App Engine project
In Android Studio, open an existing Android application that you want to modify, or create a new one. Select the Android app module under the Project node. Then click Tools > Google Cloud Endpoints > Create App Engine Backend.


In the wizard, enter the Project ID, Project Number, and API Key of your Cloud project.

This will create:

An App Engine project which contains the backend application source
An endpoints module with a RegisterActivity class, related resources, and client libraries for the Android app to communicate with the backend
The generated App Engine application (<app-name>-AppEngine) is an Apache Maven-based project. The Maven pom.xml file takes care of downloading all the dependencies, including the App Engine SDK. This module also contains the following:

A Google Cloud Endpoint (DeviceInfoEndpoint.java, auto-generated from DeviceInfo.java) that your Android app will “register” itself through. Your backend will use that registration info to send a push notification to the device.
A sample endpoint, MessageEndpoint.java, to list previously sent GCM messages and send new ones.
A starter web frontend application (index.html in webapp directory) that will show all the devices that have registered with your service, and a form to send them a GCM notification.
The endpoints module (<app-name>-endpoints) generated for you contains the classes and libraries needed by the Android application to interact with the backend:

A RegisterActivity.java class that, when invoked, will go through the GCM registration flow and also register itself with the recently created backend through DeviceInfoEndpoint.
Client libraries, so that the application can talk to the backend using an object rather than directly using raw REST calls.
XML files related to the newly created activity.
2. Add GCM registration to your app
In your Android application, you can call RegisterActivity whenever you want the registration to take place (for example, from within the onCreate() method of your main activity):


import android.content.Intent;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState); // don't skip the superclass call
    // launch the generated registration activity
    Intent intent = new Intent(this, RegisterActivity.class);
    startActivity(intent);
}
3. Deploy the sample backend server
When you’re ready to deploy an update to your (the sample) production backend in the cloud, you can do that easily from the IDE. Click on the “Maven Projects” button on the right edge of the IDE, then under Plugins > App Engine, right-click and run the appengine:update goal.

As soon as the update is deployed, you can also access your endpoints through the APIs Explorer at http://<project-id>.appspot.com/_ah/api/explorer.

For testing and debugging, you can also run your backend server locally without having to deploy your changes to the production backend. To run the backend locally, just set the value of LOCAL_ANDROID_RUN to true in CloudEndpointUtils.java in the App Engine module.

4. Build and run the Android app
Now build and run your Android app. If you called RegisterActivity from within your main activity, the device will register itself with the GCM service and the App Engine app you just deployed. If you are running the app on an emulator, note that GCM functionality requires the Google APIs Addon image, which you can download from the SDK Manager.

You can access your sample web console on any browser at http://<project-id>.appspot.com. There, you will see that the app you just started has registered with the backend. Fill out the form and send a message to see GCM in action!

Extending the basic setup
It’s easy to expand your cloud services right in Android Studio. You can add new server-side code and through Android Studio instantly generate your own custom endpoints to access those services from your Android app.

Reference: http://android-developers.blogspot.com/2013/06/adding-backend-to-your-app-in-android.html

Advanced NFC for Android

This document describes advanced NFC topics, such as working with various tag technologies, writing to NFC tags, and foreground dispatching, which allows an application in the foreground to handle intents even when other applications filter for the same ones.

Working with Supported Tag Technologies


When working with NFC tags and Android-powered devices, the main format you use to read and write data on tags is NDEF. When a device scans a tag with NDEF data, Android provides support in parsing the message and delivering it in an NdefMessage when possible. There are cases, however, when you scan a tag that does not contain NDEF data or when the NDEF data could not be mapped to a MIME type or URI. In these cases, you need to open communication directly with the tag and read and write to it with your own protocol (in raw bytes). Android provides generic support for these use cases with the android.nfc.tech package, which is described in Table 1. You can use the getTechList() method to determine the technologies supported by the tag and create the corresponding TagTechnology object with one of the classes provided by android.nfc.tech.

Table 1. Supported tag technologies

Class Description
TagTechnology The interface that all tag technology classes must implement.
NfcA Provides access to NFC-A (ISO 14443-3A) properties and I/O operations.
NfcB Provides access to NFC-B (ISO 14443-3B) properties and I/O operations.
NfcF Provides access to NFC-F (JIS 6319-4) properties and I/O operations.
NfcV Provides access to NFC-V (ISO 15693) properties and I/O operations.
IsoDep Provides access to ISO-DEP (ISO 14443-4) properties and I/O operations.
Ndef Provides access to NDEF data and operations on NFC tags that have been formatted as NDEF.
NdefFormatable Provides format operations for tags that may be NDEF formatable.

The following tag technologies are not required to be supported by Android-powered devices.

Table 2. Optional supported tag technologies

Class Description
MifareClassic Provides access to MIFARE Classic properties and I/O operations, if this Android device supports MIFARE.
MifareUltralight Provides access to MIFARE Ultralight properties and I/O operations, if this Android device supports MIFARE.

Working with tag technologies and the ACTION_TECH_DISCOVERED intent

When a device scans a tag that has NDEF data on it, but the data could not be mapped to a MIME type or URI, the tag dispatch system tries to start an activity with the ACTION_TECH_DISCOVERED intent. The ACTION_TECH_DISCOVERED intent is also used when a tag with non-NDEF data is scanned. Having this fallback allows you to work with the data on the tag directly if the tag dispatch system could not parse it for you. The basic steps when working with tag technologies are as follows:

  1. Filter for an ACTION_TECH_DISCOVERED intent specifying the tag technologies that you want to handle. See Filtering for NFC intents for more information. In general, the tag dispatch system tries to start an ACTION_TECH_DISCOVERED intent when an NDEF message cannot be mapped to a MIME type or URI, or if the tag scanned did not contain NDEF data. For more information on how this is determined, see The Tag Dispatch System.
  2. When your application receives the intent, obtain the Tag object from the intent:
    Tag tagFromIntent = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
  3. Obtain an instance of a TagTechnology by calling one of the get factory methods of the classes in the android.nfc.tech package. You can enumerate the supported technologies of the tag by calling getTechList() before calling a get factory method. For example, to obtain an instance of MifareUltralight from a Tag, do the following (a consolidated sketch follows after this list):
    MifareUltralight.get(intent.getParcelableExtra(NfcAdapter.EXTRA_TAG));
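
Putting these three steps together, a hedged sketch of the receiving side might look like this; the log tag is arbitrary:

// Sketch: handle a scanned tag delivered via ACTION_TECH_DISCOVERED.
@Override
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    if (NfcAdapter.ACTION_TECH_DISCOVERED.equals(intent.getAction())) {
        Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
        for (String tech : tag.getTechList()) {
            Log.v("NFC", "Tag supports: " + tech);
        }
        MifareUltralight mifare = MifareUltralight.get(tag);
        if (mifare != null) { // null if the tag does not support this technology
            // read or write via mifare.connect(), mifare.readPages(), ...
        }
    }
}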

Reading and writing to tags

Reading and writing to an NFC tag involves obtaining the tag from the intent and opening communication with the tag. You must define your own protocol stack to read and write data to the tag. Keep in mind, however, that you can still read and write NDEF data when working directly with a tag. It is up to you how you want to structure things. The following example shows how to work with a MIFARE Ultralight tag.

package com.example.android.nfc;

import android.nfc.Tag;
import android.nfc.tech.MifareUltralight;
import android.util.Log;
import java.io.IOException;
import java.nio.charset.Charset;

public class MifareUltralightTagTester {

    private static final String TAG = MifareUltralightTagTester.class.getSimpleName();

    public void writeTag(Tag tag, String tagText) {
        MifareUltralight ultralight = MifareUltralight.get(tag);
        try {
            ultralight.connect();
            ultralight.writePage(4, "abcd".getBytes(Charset.forName("US-ASCII")));
            ultralight.writePage(5, "efgh".getBytes(Charset.forName("US-ASCII")));
            ultralight.writePage(6, "ijkl".getBytes(Charset.forName("US-ASCII")));
            ultralight.writePage(7, "mnop".getBytes(Charset.forName("US-ASCII")));
        } catch (IOException e) {
            Log.e(TAG, "IOException while writing MifareUltralight...", e);
        } finally {
            try {
                ultralight.close();
            } catch (IOException e) {
                Log.e(TAG, "IOException while closing MifareUltralight...", e);
            }
        }
    }

    public String readTag(Tag tag) {
        MifareUltralight mifare = MifareUltralight.get(tag);
        try {
            mifare.connect();
            byte[] payload = mifare.readPages(4);
            return new String(payload, Charset.forName("US-ASCII"));
        } catch (IOException e) {
            Log.e(TAG, "IOException while reading MifareUltralight message...", e);
        } finally {
            if (mifare != null) {
               try {
                   mifare.close();
               }
               catch (IOException e) {
                   Log.e(TAG, "Error closing tag...", e);
               }
            }
        }
        return null;
    }
}

Using the Foreground Dispatch System


The foreground dispatch system allows an activity to intercept an intent and claim priority over other activities that handle the same intent. Using this system involves constructing a few data structures for the Android system to be able to send the appropriate intents to your application. To enable the foreground dispatch system:

    1. Add the following code in the onCreate() method of your activity:
      1. Create a PendingIntent object so the Android system can populate it with the details of the tag when it is scanned.
        PendingIntent pendingIntent = PendingIntent.getActivity(
            this, 0, new Intent(this, getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
      2. Declare intent filters to handle the intents that you want to intercept. The foreground dispatch system checks the specified intent filters with the intent that is received when the device scans a tag. If it matches, then your application handles the intent. If it does not match, the foreground dispatch system falls back to the intent dispatch system. Specifying a null array of intent filters and technology filters indicates that you want to filter for all tags that fall back to the TAG_DISCOVERED intent. The code snippet below handles all MIME types for NDEF_DISCOVERED. You should only handle the ones that you need.
        IntentFilter ndef = new IntentFilter(NfcAdapter.ACTION_NDEF_DISCOVERED);
            try {
                ndef.addDataType("*/*");    /* Handles all MIME based dispatches.
                                               You should specify only the ones that you need. */
            }
            catch (MalformedMimeTypeException e) {
                throw new RuntimeException("fail", e);
            }
           intentFiltersArray = new IntentFilter[] {ndef, };
      3. Set up an array of tag technologies that your application wants to handle. Call the Object.class.getName() method to obtain the class of the technology that you want to support.
        techListsArray = new String[][] { new String[] { NfcF.class.getName() } };
    2. Override the following activity lifecycle callbacks and add logic to enable and disable the foreground dispatch when the activity loses (onPause()) and regains (onResume()) focus. enableForegroundDispatch() must be called from the main thread and only when the activity is in the foreground (calling it in onResume() guarantees this). You also need to implement the onNewIntent callback to process the data from the scanned NFC tag.
public void onPause() {
    super.onPause();
    mAdapter.disableForegroundDispatch(this);
}

public void onResume() {
    super.onResume();
    mAdapter.enableForegroundDispatch(this, pendingIntent, intentFiltersArray, techListsArray);
}

public void onNewIntent(Intent intent) {
    Tag tagFromIntent = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
    //do something with tagFromIntent
}

See the ForegroundDispatch sample from API Demos for the complete sample.

Reference: http://developer.android.com/guide/topics/connectivity/nfc/advanced-nfc.html

NFC Basics for Android

This document describes the basic NFC tasks you perform in Android. It explains how to send and receive NFC data in the form of NDEF messages and describes the Android framework APIs that support these features. For more advanced topics, including a discussion of working with non-NDEF data, see Advanced NFC.

There are two major use cases when working with NDEF data and Android:

  • Reading NDEF data from an NFC tag
  • Beaming NDEF messages from one device to another with Android Beam™

Reading NDEF data from an NFC tag is handled with the tag dispatch system, which analyzes discovered NFC tags, appropriately categorizes the data, and starts an application that is interested in the categorized data. An application that wants to handle the scanned NFC tag can declare an intent filter and request to handle the data.

The Android Beam™ feature allows a device to push an NDEF message onto another device by physically tapping the devices together. This interaction provides an easier way to send data than other wireless technologies like Bluetooth, because with NFC, no manual device discovery or pairing is required. The connection is automatically started when two devices come into range. Android Beam is available through a set of NFC APIs, so any application can transmit information between devices. For example, the Contacts, Browser, and YouTube applications use Android Beam to share contacts, web pages, and videos with other devices.

The Tag Dispatch System


Android-powered devices are usually looking for NFC tags when the screen is unlocked, unless NFC is disabled in the device’s Settings menu. When an Android-powered device discovers an NFC tag, the desired behavior is to have the most appropriate activity handle the intent without asking the user what application to use. Because devices scan NFC tags at a very short range, it is likely that making users manually select an activity would force them to move the device away from the tag and break the connection. You should develop your activity to only handle the NFC tags that your activity cares about to prevent the Activity Chooser from appearing.

To help you with this goal, Android provides a special tag dispatch system that analyzes scanned NFC tags, parses them, and tries to locate applications that are interested in the scanned data. It does this by:

  1. Parsing the NFC tag and figuring out the MIME type or a URI that identifies the data payload in the tag.
  2. Encapsulating the MIME type or URI and the payload into an intent. These first two steps are described in How NFC tags are mapped to MIME types and URIs.
  3. Starting an activity based on the intent. This is described in How NFC Tags are Dispatched to Applications.

How NFC tags are mapped to MIME types and URIs

Before you begin writing your NFC applications, it is important to understand the different types of NFC tags, how the tag dispatch system parses NFC tags, and the special work that the tag dispatch system does when it detects an NDEF message. NFC tags come in a wide array of technologies and can also have data written to them in many different ways. Android has the most support for the NDEF standard, which is defined by the NFC Forum.

NDEF data is encapsulated inside a message (NdefMessage) that contains one or more records (NdefRecord). Each NDEF record must be well-formed according to the specification of the type of record that you want to create. Android also supports other types of tags that do not contain NDEF data, which you can work with by using the classes in the android.nfc.tech package. To learn more about these technologies, see the Advanced NFC topic. Working with these other types of tags involves writing your own protocol stack to communicate with the tags, so we recommend using NDEF when possible for ease of development and maximum support for Android-powered devices.

Note: To download complete NDEF specifications, go to the NFC Forum Specification Download site, and see Creating common types of NDEF records for examples of how to construct NDEF records.

Now that you have some background in NFC tags, the following sections describe in more detail how Android handles NDEF formatted tags. When an Android-powered device scans an NFC tag containing NDEF formatted data, it parses the message and tries to figure out the data’s MIME type or identifying URI. To do this, the system reads the first NdefRecord inside the NdefMessage to determine how to interpret the entire NDEF message (an NDEF message can have multiple NDEF records). In a well-formed NDEF message, the first NdefRecord contains the following fields:

3-bit TNF (Type Name Format)
Indicates how to interpret the variable length type field. Valid values are described in Table 1.
Variable length type
Describes the type of the record. If using TNF_WELL_KNOWN, use this field to specify the Record Type Definition (RTD). Valid RTD values are described in Table 2.
Variable length ID
A unique identifier for the record. This field is not used often, but if you need to uniquely identify a tag, you can create an ID for it.
Variable length payload
The actual data payload that you want to read or write. An NDEF message can contain multiple NDEF records, so don’t assume the full payload is in the first NDEF record of the NDEF message.
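
As an illustration, the sketch below reads these four fields from the first record of a scanned message using the NdefRecord accessors; how the NdefMessage itself is obtained is shown later under Obtaining information from intents:

// Logs the four fields of the first NdefRecord in an NdefMessage.
private void dumpFirstRecord(NdefMessage msg) {
    NdefRecord first = msg.getRecords()[0];
    short tnf = first.getTnf();          // 3-bit TNF (Type Name Format)
    byte[] type = first.getType();       // variable length type
    byte[] id = first.getId();           // variable length ID
    byte[] payload = first.getPayload(); // variable length payload
    Log.v("NFC", "TNF=" + tnf + ", type=" + new String(type)
            + ", id bytes=" + id.length + ", payload bytes=" + payload.length);
}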

The tag dispatch system uses the TNF and type fields to try to map a MIME type or URI to the NDEF message. If successful, it encapsulates that information inside of an ACTION_NDEF_DISCOVERED intent along with the actual payload. However, there are cases when the tag dispatch system cannot determine the type of data based on the first NDEF record. This happens when the NDEF data cannot be mapped to a MIME type or URI, or when the NFC tag does not contain NDEF data to begin with. In such cases, a Tag object that has information about the tag’s technologies, along with the payload, is encapsulated inside of an ACTION_TECH_DISCOVERED intent instead.

Table 1 describes how the tag dispatch system maps TNF and type fields to MIME types or URIs. It also describes which TNFs cannot be mapped to a MIME type or URI. In these cases, the tag dispatch system falls back to ACTION_TECH_DISCOVERED.

For example, if the tag dispatch system encounters a record of type TNF_ABSOLUTE_URI, it maps the variable length type field of that record into a URI. The tag dispatch system encapsulates that URI in the data field of an ACTION_NDEF_DISCOVERED intent along with other information about the tag, such as the payload. On the other hand, if it encounters a record of type TNF_UNKNOWN, it creates an intent that encapsulates the tag’s technologies instead.

Table 1. Supported TNFs and their mappings

Type Name Format (TNF) Mapping
TNF_ABSOLUTE_URI URI based on the type field.
TNF_EMPTY Falls back to ACTION_TECH_DISCOVERED.
TNF_EXTERNAL_TYPE URI based on the URN in the type field. The URN is encoded into the NDEF type field in a shortened form: <domain_name>:<service_name>. Android maps this to a URI in the form: vnd.android.nfc://ext/<domain_name>:<service_name>.
TNF_MIME_MEDIA MIME type based on the type field.
TNF_UNCHANGED Invalid in the first record, so falls back to ACTION_TECH_DISCOVERED.
TNF_UNKNOWN Falls back to ACTION_TECH_DISCOVERED.
TNF_WELL_KNOWN MIME type or URI depending on the Record Type Definition (RTD), which you set in the type field. See Table 2. for more information on available RTDs and their mappings.

Table 2. Supported RTDs for TNF_WELL_KNOWN and their mappings

Record Type Definition (RTD) Mapping
RTD_ALTERNATIVE_CARRIER Falls back to ACTION_TECH_DISCOVERED.
RTD_HANDOVER_CARRIER Falls back to ACTION_TECH_DISCOVERED.
RTD_HANDOVER_REQUEST Falls back to ACTION_TECH_DISCOVERED.
RTD_HANDOVER_SELECT Falls back to ACTION_TECH_DISCOVERED.
RTD_SMART_POSTER URI based on parsing the payload.
RTD_TEXT MIME type of text/plain.
RTD_URI URI based on payload.

How NFC Tags are Dispatched to Applications

When the tag dispatch system is done creating an intent that encapsulates the NFC tag and its identifying information, it sends the intent to an interested application that filters for the intent. If more than one application can handle the intent, the Activity Chooser is presented so the user can select the Activity. The tag dispatch system defines three intents, which are listed in order of highest to lowest priority:

  1. ACTION_NDEF_DISCOVERED: This intent is used to start an Activity when a tag that contains an NDEF payload is scanned and is of a recognized type. This is the highest priority intent, and the tag dispatch system tries to start an Activity with this intent before any other intent, whenever possible.
  2. ACTION_TECH_DISCOVERED: If no activities register to handle the ACTION_NDEF_DISCOVERED intent, the tag dispatch system tries to start an application with this intent. This intent is also directly started (without starting ACTION_NDEF_DISCOVERED first) if the tag that is scanned contains NDEF data that cannot be mapped to a MIME type or URI, or if the tag does not contain NDEF data but is of a known tag technology.
  3. ACTION_TAG_DISCOVERED: This intent is started if no activities handle the ACTION_NDEF_DISCOVERED or ACTION_TECH_DISCOVERED intents.

The basic way the tag dispatch system works is as follows:

  1. Try to start an Activity with the intent that was created by the tag dispatch system when parsing the NFC tag (either ACTION_NDEF_DISCOVERED or ACTION_TECH_DISCOVERED).
  2. If no activities filter for that intent, try to start an Activity with the next lowest priority intent (either ACTION_TECH_DISCOVERED or ACTION_TAG_DISCOVERED) until an application filters for the intent or until the tag dispatch system tries all possible intents.
  3. If no applications filter for any of the intents, do nothing.

Figure 1. Tag Dispatch System

Whenever possible, work with NDEF messages and the ACTION_NDEF_DISCOVERED intent, because it is the most specific out of the three. This intent allows you to start your application at a more appropriate time than the other two intents, giving the user a better experience.

Requesting NFC Access in the Android Manifest


Before you can access a device’s NFC hardware and properly handle NFC intents, declare these items in your AndroidManifest.xml file:

  • The NFC <uses-permission> element to access the NFC hardware:
    <uses-permission android:name="android.permission.NFC" />
  • The minimum SDK version that your application can support. API level 9 only supports limited tag dispatch via ACTION_TAG_DISCOVERED, and only gives access to NDEF messages via the EXTRA_NDEF_MESSAGES extra. No other tag properties or I/O operations are accessible. API level 10 includes comprehensive reader/writer support as well as foreground NDEF pushing, and API level 14 provides an easier way to push NDEF messages to other devices with Android Beam and extra convenience methods to create NDEF records.
    <uses-sdk android:minSdkVersion="10"/>
  • The uses-feature element so that your application shows up in Google Play only for devices that have NFC hardware:
    <uses-feature android:name="android.hardware.nfc" android:required="true" />

    If your application uses NFC functionality, but that functionality is not crucial to your application, you can omit the uses-feature element and check for NFC availability at runtime by checking to see if getDefaultAdapter() is null, as shown in the sketch after this list.
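
A minimal sketch of that runtime check, inside an Activity:

// getDefaultAdapter() returns null when the device has no NFC hardware.
NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
if (adapter == null) {
    // NFC is not available; hide or disable the app's NFC features
}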

Filtering for NFC Intents


To start your application when an NFC tag that you want to handle is scanned, your application can filter for one, two, or all three of the NFC intents in the Android manifest. However, you usually want to filter for the ACTION_NDEF_DISCOVERED intent for the most control over when your application starts. The ACTION_TECH_DISCOVERED intent is a fallback for ACTION_NDEF_DISCOVERED when no applications filter for ACTION_NDEF_DISCOVERED or for when the payload is not NDEF. Filtering for ACTION_TAG_DISCOVERED is usually too general a category to filter on. Many applications will filter for ACTION_NDEF_DISCOVERED or ACTION_TECH_DISCOVERED before ACTION_TAG_DISCOVERED, so your application has a low probability of starting. ACTION_TAG_DISCOVERED is only available as a last resort for applications to filter for in the cases where no other applications are installed to handle the ACTION_NDEF_DISCOVERED or ACTION_TECH_DISCOVERED intent.

Because NFC tag deployments vary and are many times not under your control, this is not always possible, which is why you can fallback to the other two intents when necessary. When you have control over the types of tags and data written, it is recommended that you use NDEF to format your tags. The following sections describe how to filter for each type of intent.

ACTION_NDEF_DISCOVERED

To filter for ACTION_NDEF_DISCOVERED intents, declare the intent filter along with the type of data that you want to filter for. The following example filters for ACTION_NDEF_DISCOVERED intents with a MIME type of text/plain:

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED"/>
    <category android:name="android.intent.category.DEFAULT"/>
    <data android:mimeType="text/plain" />
</intent-filter>

The following example filters for a URI in the form of http://developer.android.com/index.html.

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED"/>
    <category android:name="android.intent.category.DEFAULT"/>
   <data android:scheme="http"
              android:host="developer.android.com"
              android:pathPrefix="/index.html" />
</intent-filter>

ACTION_TECH_DISCOVERED

If your activity filters for the ACTION_TECH_DISCOVERED intent, you must create an XML resource file that specifies the technologies that your activity supports within a tech-list set. Your activity is considered a match if a tech-list set is a subset of the technologies that are supported by the tag, which you can obtain by calling getTechList().

For example, if the tag that is scanned supports MifareClassic, NdefFormatable, and NfcA, your tech-list set must specify all three, two, or one of the technologies (and nothing else) in order for your activity to be matched.

The following sample defines all of the technologies. You can remove the ones that you do not need. Save this file (you can name it anything you wish) in the <project-root>/res/xml folder.

<resources xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
    <tech-list>
        <tech>android.nfc.tech.IsoDep</tech>
        <tech>android.nfc.tech.NfcA</tech>
        <tech>android.nfc.tech.NfcB</tech>
        <tech>android.nfc.tech.NfcF</tech>
        <tech>android.nfc.tech.NfcV</tech>
        <tech>android.nfc.tech.Ndef</tech>
        <tech>android.nfc.tech.NdefFormatable</tech>
        <tech>android.nfc.tech.MifareClassic</tech>
        <tech>android.nfc.tech.MifareUltralight</tech>
    </tech-list>
</resources>

You can also specify multiple tech-list sets. Each of the tech-list sets is considered independently, and your activity is considered a match if any single tech-list set is a subset of the technologies that are returned by getTechList(). This provides AND and OR semantics for matching technologies. The following example matches tags that can support the NfcA and Ndef technologies or can support the NfcB and Ndef technologies:

<resources xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
    <tech-list>
        <tech>android.nfc.tech.NfcA</tech>
        <tech>android.nfc.tech.Ndef</tech>
    </tech-list>
</resources>

<resources xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2">
    <tech-list>
        <tech>android.nfc.tech.NfcB</tech>
        <tech>android.nfc.tech.Ndef</tech>
    </tech-list>
</resources>

In your AndroidManifest.xml file, specify the resource file that you just created in the <meta-data> element inside the <activity> element like in the following example:

<activity>
...
<intent-filter>
    <action android:name="android.nfc.action.TECH_DISCOVERED"/>
</intent-filter>

<meta-data android:name="android.nfc.action.TECH_DISCOVERED"
    android:resource="@xml/nfc_tech_filter" />
...
</activity>

For more information about working with tag technologies and the ACTION_TECH_DISCOVERED intent, see Working with Supported Tag Technologies in the Advanced NFC document.

ACTION_TAG_DISCOVERED

To filter for ACTION_TAG_DISCOVERED use the following intent filter:

<intent-filter>
    <action android:name="android.nfc.action.TAG_DISCOVERED"/>
</intent-filter>

Obtaining information from intents

If an activity starts because of an NFC intent, you can obtain information about the scanned NFC tag from the intent. Intents can contain the following extras, depending on the tag that was scanned:

  • EXTRA_TAG (required): a Tag object representing the scanned tag.
  • EXTRA_NDEF_MESSAGES (optional): an array of NDEF messages parsed from the tag.
  • EXTRA_ID (optional): the low-level ID of the tag.

To obtain these extras, check to see if your activity was launched with one of the NFC intents to ensure that a tag was scanned, and then obtain the extras out of the intent. The following example checks for the ACTION_NDEF_DISCOVERED intent and gets the NDEF messages from an intent extra.

public void onResume() {
    super.onResume();
    ...
    if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(getIntent().getAction())) {
        Parcelable[] rawMsgs = getIntent().getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
        NdefMessage[] msgs = null;
        if (rawMsgs != null) {
            msgs = new NdefMessage[rawMsgs.length];
            for (int i = 0; i < rawMsgs.length; i++) {
                msgs[i] = (NdefMessage) rawMsgs[i];
            }
        }
    }
    //process the msgs array
}

Alternatively, you can obtain a Tag object from the intent, which will contain the payload and allow you to enumerate the tag’s technologies:

Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);

Creating Common Types of NDEF Records


This section describes how to create common types of NDEF records to help you when writing to NFC tags or sending data with Android Beam. Starting with Android 4.0 (API level 14), the createUri() method is available to help you create URI records automatically. Starting in Android 4.1 (API level 16), createExternal() and createMime() are available to help you create MIME and external type NDEF records. Use these helper methods whenever possible to avoid mistakes when manually creating NDEF records.

This section also describes how to create the corresponding intent filter for the record. All of these NDEF record examples should be in the first NDEF record of the NDEF message that you are writing to a tag or beaming.

TNF_ABSOLUTE_URI

Note: We recommend that you use the RTD_URI type instead of TNF_ABSOLUTE_URI, because it is more efficient.

You can create a TNF_ABSOLUTE_URI NDEF record in the following way:

NdefRecord uriRecord = new NdefRecord(
    NdefRecord.TNF_ABSOLUTE_URI ,
    "http://developer.android.com/index.html".getBytes(Charset.forName("US-ASCII")),
    new byte[0], new byte[0]);

The intent filter for the previous NDEF record would look like this:

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="http"
        android:host="developer.android.com"
        android:pathPrefix="/index.html" />
</intent-filter>

TNF_MIME_MEDIA

You can create a TNF_MIME_MEDIA NDEF record in the following ways.

Using the createMime() method:

NdefRecord mimeRecord = NdefRecord.createMime("application/vnd.com.example.android.beam",
    "Beam me up, Android".getBytes(Charset.forName("US-ASCII")));

Creating the NdefRecord manually:

NdefRecord mimeRecord = new NdefRecord(
    NdefRecord.TNF_MIME_MEDIA,
    "application/vnd.com.example.android.beam".getBytes(Charset.forName("US-ASCII")),
    new byte[0], "Beam me up, Android!".getBytes(Charset.forName("US-ASCII")));

The intent filter for the previous NDEF records would look like this:

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="application/vnd.com.example.android.beam" />
</intent-filter>

TNF_WELL_KNOWN with RTD_TEXT

You can create a TNF_WELL_KNOWN NDEF record in the following way:

public NdefRecord createTextRecord(String payload, Locale locale, boolean encodeInUtf8) {
    byte[] langBytes = locale.getLanguage().getBytes(Charset.forName("US-ASCII"));
    Charset utfEncoding = encodeInUtf8 ? Charset.forName("UTF-8") : Charset.forName("UTF-16");
    byte[] textBytes = payload.getBytes(utfEncoding);
    int utfBit = encodeInUtf8 ? 0 : (1 << 7);
    char status = (char) (utfBit + langBytes.length);
    byte[] data = new byte[1 + langBytes.length + textBytes.length];
    data[0] = (byte) status;
    System.arraycopy(langBytes, 0, data, 1, langBytes.length);
    System.arraycopy(textBytes, 0, data, 1 + langBytes.length, textBytes.length);
    NdefRecord record = new NdefRecord(NdefRecord.TNF_WELL_KNOWN,
            NdefRecord.RTD_TEXT, new byte[0], data);
    return record;
}
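A hypothetical call to this helper, creating an English-language UTF-8 text record:

NdefRecord textRecord = createTextRecord("Hello, NFC!", Locale.ENGLISH, true);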

The intent filter for the previous record would look like this:

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="text/plain" />
</intent-filter>

TNF_WELL_KNOWN with RTD_URI

You can create a TNF_WELL_KNOWN NDEF record in the following ways.

Using the createUri(String) method:

NdefRecord rtdUriRecord1 = NdefRecord.createUri("http://example.com");

Using the createUri(Uri) method:

Uri uri = Uri.parse("http://example.com");
NdefRecord rtdUriRecord2 = NdefRecord.createUri(uri);

Creating the NdefRecord manually:

byte[] uriField = "example.com".getBytes(Charset.forName("US-ASCII"));
byte[] payload = new byte[uriField.length + 1];              //add 1 for the URI Prefix
payload[0] = 0x01;                                           //prefixes http://www. to the URI
System.arraycopy(uriField, 0, payload, 1, uriField.length);  //appends URI to payload
NdefRecord rtdUriRecord = new NdefRecord(
    NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload);

The intent filter for the previous NDEF records would look like this:

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="http"
        android:host="example.com"
        android:pathPrefix="" />
</intent-filter>

TNF_EXTERNAL_TYPE

You can create a TNF_EXTERNAL_TYPE NDEF record in the following ways:

Using the createExternal() method:

byte[] payload; //assign to your data
String domain = "com.example"; //usually your app's package name
String type = "externalType";
NdefRecord extRecord = NdefRecord.createExternal(domain, type, payload);

Creating the NdefRecord manually:

byte[] payload;
...
NdefRecord extRecord = new NdefRecord(
    NdefRecord.TNF_EXTERNAL_TYPE, "com.example:externalType", new byte[0], payload);

The intent filter for the previous NDEF records would look like this:

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="vnd.android.nfc"
        android:host="ext"
        android:pathPrefix="/com.example:externalType"/>
</intent-filter>

Use TNF_EXTERNAL_TYPE for more generic NFC tag deployments to better support both Android-powered and non-Android-powered devices.

Note: URNs for TNF_EXTERNAL_TYPE have a canonical format of urn:nfc:ext:example.com:externalType; however, the NFC Forum RTD specification declares that the urn:nfc:ext: portion of the URN must be omitted from the NDEF record. So all you need to provide is the domain (example.com in the example) and type (externalType in the example) separated by a colon. When dispatching TNF_EXTERNAL_TYPE, Android converts the urn:nfc:ext:example.com:externalType URN to a vnd.android.nfc://ext/example.com:externalType URI, which is what the intent filter in the example declares.

Android Application Records

Introduced in Android 4.0 (API level 14), an Android Application Record (AAR) provides a stronger certainty that your application is started when an NFC tag is scanned. An AAR has the package name of an application embedded inside an NDEF record. You can add an AAR to any NDEF record of your NDEF message, because Android searches the entire NDEF message for AARs. If it finds an AAR, it starts the application based on the package name inside the AAR. If the application is not present on the device, Google Play is launched to download the application.

AARs are useful if you want to prevent other applications from filtering for the same intent and potentially handling specific tags that you have deployed. AARs are only supported at the application level, because of the package name constraint, and not at the Activity level as with intent filtering. If you want to handle an intent at the Activity level, use intent filters.

If a tag contains an AAR, the tag dispatch system dispatches in the following manner:

  1. Try to start an Activity using an intent filter as normal. If the Activity that matches the intent also matches the AAR, start the Activity.
  2. If the Activity that filters for the intent does not match the AAR, if multiple Activities can handle the intent, or if no Activity handles the intent, start the application specified by the AAR.
  3. If no application can start with the AAR, go to Google Play to download the application based on the AAR.

Note: You can override AARs and the intent dispatch system with the foreground dispatch system, which allows a foreground activity to have priority when an NFC tag is discovered. With this method, the activity must be in the foreground to override AARs and the intent dispatch system.
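As a rough sketch of foreground dispatch: enableForegroundDispatch() and disableForegroundDispatch() are the real NfcAdapter methods, and the mNfcAdapter field is assumed to be initialized via NfcAdapter.getDefaultAdapter(this):

@Override
public void onResume() {
    super.onResume();
    Intent intent = new Intent(this, getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP);
    PendingIntent pi = PendingIntent.getActivity(this, 0, intent, 0);
    // Passing null filters and tech lists delivers every discovered tag
    // to this activity while it is in the foreground.
    mNfcAdapter.enableForegroundDispatch(this, pi, null, null);
}

@Override
public void onPause() {
    super.onPause();
    mNfcAdapter.disableForegroundDispatch(this);
}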

If you still want to filter for scanned tags that do not contain an AAR, you can declare intent filters as normal. This is useful if your application is interested in other tags that do not contain an AAR. For example, maybe you want to guarantee that your application handles proprietary tags that you deploy as well as general tags deployed by third parties. Keep in mind that AARs are specific to Android 4.0 devices or later, so when deploying tags, you most likely want to use a combination of AARs and MIME types/URIs to support the widest range of devices. In addition, when you deploy NFC tags, think about how you want to write your NFC tags to enable support for the most devices (Android-powered and other devices). You can do this by defining a relatively unique MIME type or URI to make it easier for applications to distinguish.

Android provides a simple API to create an AAR, createApplicationRecord(). All you need to do is embed the AAR anywhere in your NdefMessage. You do not want to use the first record of your NdefMessage, unless the AAR is the only record in the NdefMessage. This is because the Android system checks the first record of an NdefMessage to determine the MIME type or URI of the tag, which is used to create an intent for applications to filter. The following code shows you how to create an AAR:

NdefMessage msg = new NdefMessage(
        new NdefRecord[] {
            ...,
            NdefRecord.createApplicationRecord("com.example.android.beam")});

Beaming NDEF Messages to Other Devices


Android Beam allows simple peer-to-peer data exchange between two Android-powered devices. The application that wants to beam data to another device must be in the foreground and the device receiving the data must not be locked. When the beaming device comes in close enough contact with a receiving device, the beaming device displays the “Touch to Beam” UI. The user can then choose whether or not to beam the message to the receiving device.

Note: Foreground NDEF pushing was available at API level 10, which provides similar functionality to Android Beam. These APIs have since been deprecated, but are available to support older devices. See enableForegroundNdefPush() for more information.

You can enable Android Beam for your application by calling one of the following two methods:

  • setNdefPushMessage(): accepts an NdefMessage to push; the same message is pushed whenever the devices are in range.
  • setNdefPushMessageCallback(): accepts a callback whose createNdefMessage() method is invoked when a device is in range, so the message can be built dynamically.

An activity can only push one NDEF message at a time, so setNdefPushMessageCallback() takes precedence over setNdefPushMessage() if both are set (a sketch of the static variant follows the list below). To use Android Beam, the following general guidelines must be met:

  • The activity that is beaming the data must be in the foreground. Both devices must have their screens unlocked.
  • You must encapsulate the data that you are beaming in an NdefMessage object.
  • The NFC device that is receiving the beamed data must support the com.android.npp NDEF push protocol or NFC Forum’s SNEP (Simple NDEF Exchange Protocol). The com.android.npp protocol is required for devices on API level 9 (Android 2.3) to API level 13 (Android 3.2). com.android.npp and SNEP are both required on API level 14 (Android 4.0) and later.
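As a minimal sketch of the static variant, assuming an mNfcAdapter field initialized as in the sample further below:

NdefMessage staticMsg = new NdefMessage(new NdefRecord[] {
        NdefRecord.createUri("http://example.com") // pushed whenever devices are in range
});
mNfcAdapter.setNdefPushMessage(staticMsg, this);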

Note: If your activity enables Android Beam and is in the foreground, the standard intent dispatch system is disabled. However, if your activity also enables foreground dispatching, then it can still scan tags that match the intent filters set in the foreground dispatching.

To enable Android Beam:

  1. Create an NdefMessage that contains the NdefRecords that you want to push onto the other device.
  2. Call setNdefPushMessage() with an NdefMessage, or call setNdefPushMessageCallback() passing in a NfcAdapter.CreateNdefMessageCallback object, in the onCreate() method of your activity. These methods require at least one activity that you want to enable with Android Beam, along with an optional list of other activities to activate. In general, you use setNdefPushMessage() if your Activity only needs to push the same NDEF message at all times, when two devices are in range to communicate. You use setNdefPushMessageCallback() when your application cares about the current context of the application and wants to push an NDEF message depending on what the user is doing in your application.

The following sample shows how a simple activity calls NfcAdapter.CreateNdefMessageCallback in the onCreate() method of an activity (see AndroidBeamDemo for the complete sample). This example also uses the createMime() helper to create a MIME record:

package com.example.android.beam;

import android.app.Activity;
import android.content.Intent;
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.NfcAdapter;
import android.nfc.NfcAdapter.CreateNdefMessageCallback;
import android.nfc.NfcEvent;
import android.os.Bundle;
import android.os.Parcelable;
import android.widget.TextView;
import android.widget.Toast;
import java.nio.charset.Charset;


public class Beam extends Activity implements CreateNdefMessageCallback {
    NfcAdapter mNfcAdapter;
    TextView textView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        textView = (TextView) findViewById(R.id.textView);
        // Check for available NFC Adapter
        mNfcAdapter = NfcAdapter.getDefaultAdapter(this);
        if (mNfcAdapter == null) {
            Toast.makeText(this, "NFC is not available", Toast.LENGTH_LONG).show();
            finish();
            return;
        }
        // Register callback
        mNfcAdapter.setNdefPushMessageCallback(this, this);
    }

    @Override
    public NdefMessage createNdefMessage(NfcEvent event) {
        String text = ("Beam me up, Android!\n\n" +
                "Beam Time: " + System.currentTimeMillis());
        NdefMessage msg = new NdefMessage(
                new NdefRecord[] { NdefRecord.createMime(
                        "application/vnd.com.example.android.beam", text.getBytes())
         /**
          * The Android Application Record (AAR) is commented out. When a device
          * receives a push with an AAR in it, the application specified in the AAR
          * is guaranteed to run. The AAR overrides the tag dispatch system.
          * You can add it back in to guarantee that this
          * activity starts when receiving a beamed message. For now, this code
          * uses the tag dispatch system.
          */
          //,NdefRecord.createApplicationRecord("com.example.android.beam")
        });
        return msg;
    }

    @Override
    public void onResume() {
        super.onResume();
        // Check to see that the Activity started due to an Android Beam
        if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(getIntent().getAction())) {
            processIntent(getIntent());
        }
    }

    @Override
    public void onNewIntent(Intent intent) {
        // onResume gets called after this to handle the intent
        setIntent(intent);
    }

    /**
     * Parses the NDEF Message from the intent and prints to the TextView
     */
    void processIntent(Intent intent) {
        textView = (TextView) findViewById(R.id.textView);
        Parcelable[] rawMsgs = intent.getParcelableArrayExtra(
                NfcAdapter.EXTRA_NDEF_MESSAGES);
        // only one message sent during the beam
        NdefMessage msg = (NdefMessage) rawMsgs[0];
        // record 0 contains the MIME type, record 1 is the AAR, if present
        textView.setText(new String(msg.getRecords()[0].getPayload()));
    }
}

Note that this code comments out an AAR, which you can remove. If you enable the AAR, the application specified in the AAR always receives the Android Beam message. If the application is not present, Google Play is started to download the application. Therefore, the following intent filter is not technically necessary for Android 4.0 devices or later if the AAR is used:

<intent-filter>
  <action android:name="android.nfc.action.NDEF_DISCOVERED"/>
  <category android:name="android.intent.category.DEFAULT"/>
  <data android:mimeType="application/vnd.com.example.android.beam"/>
</intent-filter>

With this intent filter, the com.example.android.beam application now can be started when it scans an NFC tag or receives an Android Beam with an AAR of type com.example.android.beam, or when an NDEF formatted message contains a MIME record of type application/vnd.com.example.android.beam.

Even though AARs guarantee an application is started or downloaded, intent filters are recommended, because they let you start an Activity of your choice in your application instead of always starting the main Activity within the package specified by an AAR. AARs do not have Activity level granularity. Also, because some Android-powered devices do not support AARs, you should also embed identifying information in the first NDEF record of your NDEF messages and filter for that as well, just in case. See Creating Common Types of NDEF records for more information on how to create records.

Reference : http://developer.android.com/guide/topics/connectivity/nfc/nfc.html

Session Initiation Protocol (SIP) for Android

Android provides an API that supports the Session Initiation Protocol (SIP). This lets you add SIP-based internet telephony features to your applications. Android includes a full SIP protocol stack and integrated call management services that let applications easily set up outgoing and incoming voice calls, without having to manage sessions, transport-level communication, or audio record or playback directly.

Here are examples of the types of applications that might use the SIP API:

  • Video conferencing.
  • Instant messaging.

Here are the requirements for developing a SIP application:

  • You must have a mobile device that is running Android 2.3 or higher.
  • SIP runs over a wireless data connection, so your device must have a data connection (with a mobile data service or Wi-Fi). This means that you can’t test on AVD—you can only test on a physical device. For details, see Testing SIP Applications.
  • Each participant in the application’s communication session must have a SIP account. There are many different SIP providers that offer SIP accounts.

SIP API Classes and Interfaces

Here is a summary of the classes and one interface (SipRegistrationListener) that are included in the Android SIP API:

  • SipAudioCall: Handles an Internet audio call over SIP.
  • SipAudioCall.Listener: Listener for events relating to a SIP call, such as when a call is being received (“on ringing”) or a call is outgoing (“on calling”).
  • SipErrorCode: Defines error codes returned during SIP actions.
  • SipManager: Provides APIs for SIP tasks, such as initiating SIP connections, and provides access to related SIP services.
  • SipProfile: Defines a SIP profile, including a SIP account, domain and server information.
  • SipProfile.Builder: Helper class for creating a SipProfile.
  • SipSession: Represents a SIP session that is associated with a SIP dialog or a standalone transaction not within a dialog.
  • SipSession.Listener: Listener for events relating to a SIP session, such as when a session is being registered (“on registering”) or a call is outgoing (“on calling”).
  • SipSession.State: Defines SIP session states, such as “registering”, “outgoing call”, and “in call”.
  • SipRegistrationListener: An interface that is a listener for SIP registration events.

Creating the Manifest


If you are developing an application that uses the SIP API, remember that the feature is supported only on Android 2.3 (API level 9) and higher versions of the platform. Also, among devices running Android 2.3 (API level 9) or higher, not all devices will offer SIP support.

To use SIP, add the following permissions to your application’s manifest:

  • android.permission.USE_SIP
  • android.permission.INTERNET

To ensure that your application can only be installed on devices that are capable of supporting SIP, add the following to your application’s manifest:

  • <uses-sdk android:minSdkVersion="9" />. This indicates that your application requires Android 2.3 or higher. For more information, see API Levels and the documentation for the <uses-sdk> element.

To control how your application is filtered from devices that do not support SIP (for example, on Google Play), add the following to your application’s manifest:

  • <uses-feature android:name="android.hardware.sip.voip" />. This states that your application uses the SIP API. The declaration should include an android:required attribute that indicates whether you want the application to be filtered from devices that do not offer SIP support. Other <uses-feature> declarations may also be needed, depending on your implementation. For more information, see the documentation for the <uses-feature> element.

If your application is designed to receive calls, you must also define a receiver (BroadcastReceiver subclass) in the application’s manifest:

  • <receiver android:name=".IncomingCallReceiver" android:label="Call Receiver"/>

Here are excerpts from the SipDemo manifest:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.android.sip">
  ...
     <receiver android:name=".IncomingCallReceiver" android:label="Call Receiver"/>
  ...
  <uses-sdk android:minSdkVersion="9" />
  <uses-permission android:name="android.permission.USE_SIP" />
  <uses-permission android:name="android.permission.INTERNET" />
  ...
  <uses-feature android:name="android.hardware.sip.voip" android:required="true" />
  <uses-feature android:name="android.hardware.wifi" android:required="true" />
  <uses-feature android:name="android.hardware.microphone" android:required="true" />
</manifest>

Creating a SipManager


To use the SIP API, your application must create a SipManager object. The SipManager takes care of the following in your application:

  • Initiating SIP sessions.
  • Initiating and receiving calls.
  • Registering and unregistering with a SIP provider.
  • Verifying session connectivity.

You instantiate a new SipManager as follows:

public SipManager mSipManager = null;
...
if (mSipManager == null) {
    mSipManager = SipManager.newInstance(this);
}
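Because not all devices offer SIP, it can help to check support first. A defensive sketch using the real SipManager.isApiSupported() and isVoipSupported() static checks:

if (!SipManager.isApiSupported(this) || !SipManager.isVoipSupported(this)) {
    // SIP or VOIP is not supported on this device; disable the calling features
} else if (mSipManager == null) {
    mSipManager = SipManager.newInstance(this);
}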

Registering with a SIP Server


A typical Android SIP application involves one or more users, each of whom has a SIP account. In an Android SIP application, each SIP account is represented by a SipProfile object.

SipProfile defines a SIP profile, including a SIP account, and domain and server information. The profile associated with the SIP account on the device running the application is called the local profile. The profile that the session is connected to is called the peer profile. When your SIP application logs into the SIP server with the local SipProfile, this effectively registers the device as the location to send SIP calls to for your SIP address.

This section shows how to create a SipProfile, register it with a SIP server, and track registration events.

You create a SipProfile object as follows:

public SipProfile mSipProfile = null;
...

SipProfile.Builder builder = new SipProfile.Builder(username, domain);
builder.setPassword(password);
mSipProfile = builder.build();
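SipProfile.Builder also exposes optional settings. A small sketch; the method names setDisplayName() and setAutoRegistration() are real SipProfile.Builder APIs, while the values are placeholders:

SipProfile.Builder builder = new SipProfile.Builder(username, domain);
builder.setPassword(password);
builder.setDisplayName("Alice");       // name shown to the peer (optional)
builder.setAutoRegistration(true);     // keep the registration alive automatically
mSipProfile = builder.build();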

The following code excerpt opens the local profile for making calls and/or receiving generic SIP calls. The caller can make subsequent calls through mSipManager.makeAudioCall. This excerpt also sets the action android.SipDemo.INCOMING_CALL, which will be used by an intent filter when the device receives a call (see Setting up an intent filter to receive calls). This is the registration step:

Intent intent = new Intent();
intent.setAction("android.SipDemo.INCOMING_CALL");
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, 0, intent, Intent.FILL_IN_DATA);
mSipManager.open(mSipProfile, pendingIntent, null);

Finally, this code sets a SipRegistrationListener on the SipManager. This tracks whether the SipProfile was successfully registered with your SIP service provider:

mSipManager.setRegistrationListener(mSipProfile.getUriString(), new SipRegistrationListener() {

public void onRegistering(String localProfileUri) {
    updateStatus("Registering with SIP Server...");
}

public void onRegistrationDone(String localProfileUri, long expiryTime) {
    updateStatus("Ready");
}

public void onRegistrationFailed(String localProfileUri, int errorCode,
    String errorMessage) {
    updateStatus("Registration failed.  Please check settings.");
}
});

When your application is done using a profile, it should close it to release the associated objects from memory and unregister the device from the server. For example:

public void closeLocalProfile() {
    if (mSipManager == null) {
       return;
    }
    try {
       if (mSipProfile != null) {
          mSipManager.close(mSipProfile.getUriString());
       }
     } catch (Exception ee) {
       Log.d("WalkieTalkieActivity/onDestroy", "Failed to close local profile.", ee);
     }
}

Making an Audio Call


To make an audio call, you must have the following in place:

  • A SipProfile that is making the call (the “local profile”), and a valid SIP address to receive the call (the “peer profile”).
  • A SipManager object.

To make an audio call, you should set up a SipAudioCall.Listener. Much of the client’s interaction with the SIP stack happens through listeners. In this snippet, you see how the SipAudioCall.Listener sets things up after the call is established:

SipAudioCall.Listener listener = new SipAudioCall.Listener() {

   @Override
   public void onCallEstablished(SipAudioCall call) {
      call.startAudio();
      call.setSpeakerMode(true);
      call.toggleMute();
         ...
   }

   @Override
   public void onCallEnded(SipAudioCall call) {
      // Do something.
   }
};

Once you’ve set up the SipAudioCall.Listener, you can make the call. The SipManager method makeAudioCall() takes the following parameters:

  • A local SIP profile (the caller).
  • A peer SIP profile (the user being called).
  • SipAudioCall.Listener to listen to the call events from SipAudioCall. This can be null, but as shown above, the listener is used to set things up once the call is established.
  • The timeout value, in seconds.

For example:

 call = mSipManager.makeAudioCall(mSipProfile.getUriString(), sipAddress, listener, 30);
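When the call is finished, hang up and release it. A minimal sketch using the SipAudioCall methods endCall() and close(); note that endCall() throws SipException:

try {
    if (call != null) {
        call.endCall();   // hang up the SIP session
    }
} catch (SipException e) {
    // handle or log the failure
} finally {
    if (call != null) {
        call.close();     // release resources held by the call
    }
}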

Receiving Calls


To receive calls, a SIP application must include a subclass of BroadcastReceiver that has the ability to respond to an intent indicating that there is an incoming call. Thus, you must do the following in your application:

  • In AndroidManifest.xml, declare a <receiver>. In SipDemo, this is <receiver android:name=".IncomingCallReceiver" android:label="Call Receiver"/>.
  • Implement the receiver, which is a subclass of BroadcastReceiver. In SipDemo, this is IncomingCallReceiver.
  • Initialize the local profile (SipProfile) with a pending intent that fires your receiver when someone calls the local profile.
  • Set up an intent filter that filters by the action that represents an incoming call. In SipDemo, this action is android.SipDemo.INCOMING_CALL.

Subclassing BroadcastReceiver

To receive calls, your SIP application must subclass BroadcastReceiver. The Android system handles incoming SIP calls and broadcasts an “incoming call” intent (as defined by the application) when it receives a call. Here is the subclassed BroadcastReceiver code from SipDemo. To see the full example, go to SipDemo sample, which is included in the SDK samples. For information on downloading and installing the SDK samples, see Getting the Samples.

/**
 * Listens for incoming SIP calls, intercepts and hands them off to WalkieTalkieActivity.
 */
public class IncomingCallReceiver extends BroadcastReceiver {
    /**
     * Processes the incoming call, answers it, and hands it over to the
     * WalkieTalkieActivity.
     * @param context The context under which the receiver is running.
     * @param intent The intent being received.
     */
    @Override
    public void onReceive(Context context, Intent intent) {
        SipAudioCall incomingCall = null;
        try {
            SipAudioCall.Listener listener = new SipAudioCall.Listener() {
                @Override
                public void onRinging(SipAudioCall call, SipProfile caller) {
                    try {
                        call.answerCall(30);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            };
            WalkieTalkieActivity wtActivity = (WalkieTalkieActivity) context;
            incomingCall = wtActivity.mSipManager.takeAudioCall(intent, listener);
            incomingCall.answerCall(30);
            incomingCall.startAudio();
            incomingCall.setSpeakerMode(true);
            if(incomingCall.isMuted()) {
                incomingCall.toggleMute();
            }
            wtActivity.call = incomingCall;
            wtActivity.updateStatus(incomingCall);
        } catch (Exception e) {
            if (incomingCall != null) {
                incomingCall.close();
            }
        }
    }
}

Setting up an intent filter to receive calls

When the SIP service receives a new call, it sends out an intent with the action string provided by the application. In SipDemo, this action string is android.SipDemo.INCOMING_CALL.

This code excerpt from SipDemo shows how the SipProfile object gets created with a pending intent based on the action string android.SipDemo.INCOMING_CALL. The PendingIntent object will perform a broadcast when the SipProfile receives a call:

public SipManager mSipManager = null;
public SipProfile mSipProfile = null;
...

Intent intent = new Intent(); 
intent.setAction("android.SipDemo.INCOMING_CALL"); 
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, 0, intent, Intent.FILL_IN_DATA); 
mSipManager.open(mSipProfile, pendingIntent, null);

The broadcast will be intercepted by the intent filter, which will then fire the receiver (IncomingCallReceiver). You can specify an intent filter in your application’s manifest file, or do it in code, as the SipDemo sample application does in the onCreate() method of its Activity:

public class WalkieTalkieActivity extends Activity implements View.OnTouchListener {
...
    public IncomingCallReceiver callReceiver;
    ...

    @Override
    public void onCreate(Bundle savedInstanceState) {
       IntentFilter filter = new IntentFilter();
       filter.addAction("android.SipDemo.INCOMING_CALL");
       callReceiver = new IncomingCallReceiver();
       this.registerReceiver(callReceiver, filter);
       ...
    }
    ...
}
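A receiver registered in code this way should also be unregistered when the activity is destroyed. A minimal sketch, assuming the callReceiver field above:

@Override
public void onDestroy() {
    super.onDestroy();
    if (callReceiver != null) {
        this.unregisterReceiver(callReceiver); // avoid leaking the receiver
    }
}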

Testing SIP Applications


To test SIP applications, you need the following:

  • A mobile device that is running Android 2.3 or higher. SIP runs over wireless, so you must test on an actual device. Testing on AVD won’t work.
  • A SIP account. There are many different SIP providers that offer SIP accounts.
  • If you are placing a call, it must also be to a valid SIP account.

To test a SIP application:

  1. On your device, connect to wireless (Settings > Wireless & networks > Wi-Fi > Wi-Fi settings)
  2. Set up your mobile device for testing, as described in Developing on a Device.
  3. Run your application on your mobile device, as described in Developing on a Device.
  4. If you are using Eclipse, you can view the application log output in Eclipse using LogCat (Window > Show View > Other > Android > LogCat).

Reference : http://developer.android.com/guide/topics/connectivity/sip.html

Install and Configure a Rhodes Application on Windows x64 for Cross-Device Development

If you’re running Windows, ensure that Java Development Kit is installed. The Sun JDK for Windows is available here.

Download the latest RhoStudio for Windows and run the download file to install RhoStudio.

If you use Symantec Antivirus, it may warn about a “Suspicious.MLApp” security risk in the rubyw.exe file during installation. This is a known false positive in Symantec Antivirus;
ignore the warning.
Use Motorola RhoStudio 64-bit to run RhoStudio with x64 JDK.
Use Motorola RhoStudio 32-bit to run RhoStudio with x86 JDK.

Setting path to the Java Development Kit

In RhoStudio Preferences, open RhoMobile and check that the path is set to your JDK installation.

Generating a Rhodes Application

You also can use RhoStudio to generate a RhoConnect application and source adapter. You can see an example in the RhoConnect tutorial.

In RhoStudio, select File->New->Project…

The New Project window opens. Select the RhoMobile application wizard and click the Next button.

Enter the name for your Rhodes application in Project name; in this case, “storemanager”. You may specify a folder where your project is stored; by default, the destination is your RhoStudio workspace folder. Uncheck the RhoElements checkbox if this is to be a strictly Rhodes application, then press the Finish button.

After pressing the Finish button, you’ll see the Rhodes app generator script output in the output console (Rhomobile build console).

Generating a Rhodes Model

Rhodes applications support a Model-View-Controller (MVC) pattern.  To start our application, we will generate a Model. To generate a Rhodes model and create the associated Controller and View templates, right-click on the application project in the Project Explorer and select New->Rhodes model.

In the Model Information window, enter the name for your model: in this case, Product. Also enter the Model attributes as a string with no spaces and each attribute separated by a comma: in this case, name,brand,price,quantity,sku. (Whitespace at the beginning and end of a field name is trimmed, and whitespace in the middle of a field name is replaced with an underscore.)

Click the Finish button to create the model.

After pressing the Finish button, you’ll see the Rhodes model generator script output in the output console (Rhodes build log console).

You should now see a ‘Product’ folder below the ‘app’ folder in your storemanager application. These files constitute the Model, View, and Controller files for the Product model we just created. The files are organized as follows:

  • product.rb –> This is the Model file which contains the Model definition. Since we are using the default PropertyBag definition, we don’t need to modify this file any further.
  • product_controller.rb –> This file contains the business logic which relates to our Model.
  • *.erb –> The .erb files are the html view template files. We’ll be modifying them in the next section.

Editing Rhodes Views

You may edit the generated ERB files to customize the HTML as you see fit. Typically you will provide links to the model index page from the home screen. In order to accomplish this, a modification needs to be made to the default view for the application, called index.erb. Below is the content for the StoreManager app’s generated top level index.erb file (app/index.erb). Open this file for editing.

Storemanager

<% if SyncEngine::logged_in > 0 %>
    Sync
    Logout
<% else %>
    Login
<% end %>

(The generated HTML markup around the title and these links is omitted here; only the page title and the ERB logic are shown.)

To provide a link to the Product model’s index page and templates, replace the list item with the title ‘Add link here’ with the following:

<li>
    <a href="Product">
        <span class="title">Products</span>
    </a>
</li>

This change now means that when the index.erb view is displayed (the default view when the app starts), you will see a UI element called “Products” that will take you to the controller for the “Product” Model definition. Because no specific action is provided, the controller will default to displaying the Model’s index page, in this case the Product model’s index page. All further functionality in the app is carried out by the default scaffolding of the generated controller and view files. These generated files provide basic CRUD (Create, Read, Update, Delete) functionality for your Model.

You can edit the top level app page or any of the other view templates with any HTML you wish. We don’t attempt to teach you HTML or Ruby here, but there are many good external references for both topics.

Managing Your Build Configuration

The build.yml file for your application manages your Rhodes build time configuration. Double-click the ‘build.yml’ item in your project tree to open the build.yml editor. In the editor you’ll see two tabs: Rhobuild setting for the WYSIWYG editor, and build.yml for the text editor.

Rhobuild setting WYSIWYG editor:

build.yml text editor:

You will have default values for your application name, log file (the log file will be located in your application folder after you run your application), and the path to the Rhodes gem. You can use the WYSIWYG editor to change these. For example, you might want to point to a different Rhodes gem if you have more than one Rhodes gem installed.

You can add capabilities to your application, such as camera and vibrate, by pressing the Capabilities: Add button, selecting the capabilities from the popup window, and clicking the Ok button.

The selected application capabilities will appear in the Capabilities: text field in the WYSIWYG editor.

Building and Running Your Application

To start the build process, create a Run Configuration: select your project in the Project Explorer, and select Run->Run Configurations… from the menu. The Run Configurations window appears.

To create a new build configuration for your application, select Rhodes Application. Then either right-click on Rhodes Application or click the New button. A new configuration appears under Rhodes Application.

Building and Running with RhoSimulator

You can run your project in RhoSimulator, a simulator type available only in RhoStudio. RhoSimulator allows you to build and run your project on the platform of your choice without having to install the SDK for that platform. Instead, RhoSimulator will mimic the platform. RhoSimulator builds and runs your application quickly, making it useful for testing and debugging.

Once you have done your application development and debugging, you will need to install the SDK to create a build for that device. Or you can upload your project to RhoHub, and use RhoHub to create the build for your device.

To run RhoSimulator for the iPhone platform, select your project from the Project Explorer (in this case, we use the storemanager project created earlier). Then select Run –> Run Configurations from the main menu. The Run Configurations window appears.

In the Run Configurations window, select RhoSimulator for the Simulator type, and select your desired platform type. For example, if you select iPhone, RhoSimulator will mimic an iPhone. The calls to the system will return as though the application was running on an actual iPhone.

Click Run to start the simulator.

In the storemanager example, we can add a couple of products. Click the Products link, then click New, enter the product attributes, and click Create.

Using the Web Inspector

The RhoSimulator also brings up a Webkit Inspector window, allowing you to inspect the web interface for your application.

For example, you can see the web code for the listview. Here is the listview of the products for the storemanager example. You can see the link for the iPhone URL, and you could change the border color and thickness for the iPhone product (changing from 1px to 3px and solid #CCC to #111 would give the iPhone link a thicker, darker border).

Using the Debugger

You can use the RhoStudio debugger to debug the Ruby code in your Rhodes or RhoConnect application.

Debugging a Rhodes Application

To run the RhoSimulator debugger for the iPhone platform, select your Rhodes project from the Project Explorer (in this case, we use the storemanager project created earlier). Then select Run –> Debug Configurations from the main menu. The Debug Configurations window appears.

Click Debug to start the debugger.

You will again get a RhoSimulator showing your device, and the Web Inspector window. And you get a Debug view in your workspace.

To debug your Ruby code in your Rhodes application, click the RhoStudio tab in your workspace, then open the Ruby file in your Rhodes application that you wish to debug.

Now click the Debug tab. You will see the Ruby file opened in the debugger.

Debugging a RhoConnect Application

To run the debugger for RhoConnect, select your RhoConnect project from the Project Explorer. Then select Run –> Debug Configurations from the main menu. The Debug Configurations window appears.

Click Debug to start the debugger.

You will see your RhoConnect project in the debugger; there will be a console window with redis server information messages. (You will not get a RhoSimulator or a Web Inspector, as you do with the Rhodes debugger.)

To debug your Ruby code in your RhoConnect application, you do the same as for a Rhodes application: click the RhoStudio tab in your workspace, then open the Ruby file in your RhoConnect project that you wish to debug.

Setting a Breakpoint

You can perform operations such as setting a breakpoint: double-click in the left margin at the line of code where you want the breakpoint.

For example, in the storemanager code, there is a rendering section in product_controller.rb.

# GET /Product
def index
  @products = Product.find(:all)
  render :back => '/app'
end

You can set a breakpoint on the line render :back => '/app', then you can go to the RhoSimulator and click on Product. The simulator will stop at the point of rendering.

If you set a breakpoint on the first line of a function definition, as in the following line of Ruby code, that breakpoint will always be disabled. You should not set breakpoints there.

def some_method(value='default', arr=[])

Inspecting the Variables

You can also inspect the variables. When you click “Products” in the debugger simulator, and you have a breakpoint set as earlier in product_controller.rb on the rendering line, the application will pause just before the rendering of the product listing page. You can then inspect the variables in the variable window. The storemanager example shows that at this point in the app, the local database has been read, showing that a couple of products have been created (an iPhone and an iPhone4S).

Debugger Support for Extensions

The RhoStudio debugger supports extensions, but only pure Ruby extensions or the Ruby part of a native extension.

Debugging Ruby Framework Code

The debugger supports debugging Ruby framework code. With an older project, you may need to recreate the project in RhoStudio in order to enable the debugger. A good way to do this is as follows.

  1. Delete the project in the Project Explorer view without deleting the project content.
  2. Create a new project by creating from existing sources in the RhoMobile application wizard.
  3. Use your old project content as the existing source.

Importing a Rhodes Project from a non-RhoStudio Source

You can have a Rhodes project that you wish to import into RhoStudio. For example, you might have created a project using RhoHub and have a local repository of that project on your computer. Or you might want to import the sample projects that came with the RhoStudio installer. The import process is similar to importing external projects in standard Eclipse.

Select File –> New –> Project…

From the New Project window, select Rhomobile –> Rhoconnect application or Rhomobile –> Rhodes application. Then click the Next button.

From the Rhodes (or Rhoconnect) application generator wizard window, click the “Create application from existing sources” checkbox.

Click the Browse button, then navigate to and select the folder containing your project. The Project name will change to the name of your project folder.

Click the Finish button. Your project will appear in the Project Explorer.

Creating a Device Build in RhoStudio

Once you have developed your application using the RhoSimulator and debugger, you will need to create a device build of your application. To do this with RhoStudio, you install the platform SDKs.

To create a device build with RhoStudio, you need to do the following:

  • Install the SDK for your application’s platform.
  • Set RhoStudio Preferences to that platform’s SDK.
  • In Run Configurations, set to that device, then do the device build and run.

You can do the device build with RhoHub if you wish to avoid installing the SDKs on your computer. To do your device build on RhoHub, upload your application into RhoHub, then use RhoHub to create the device build. Instructions for using RhoHub are in the RhoHub tutorial.

Installing the SDK

On Macintosh computers, once you have installed Xcode, RhoStudio will know the location of the iOS SDK. You do not set any locations.

On other platforms, such as Android, you need to install the SDK, and then set RhoStudio Preferences for that platform’s SDK. For example, for Android device builds you must download and install the Android SDK and the Android NDK.

Set RhoStudio SDK Preferences

Once you have the SDK for your platform installed, set RhoStudio Preferences for that SDK. For example, to do Android device builds:

  1. In the Preferences window, open the Rhomobile item and select Android.
  2. Click the Browse button and navigate to the locations where you installed your Android SDK and NDK.

Set Run Configurations to the Device Build and Run

To start the device build process, create a Run Configuration: select your project in the Project Explorer, and select Run->Run Configurations… from the menu. The Run Configurations window appears. Set Run Configurations as follows:

  • Set the Platform to Android.
  • Set the Simulator type to either device to run on the device, or simulator to run on the simulator provided with the Android SDK.
  • If you set the Simulator type to simulator, set the Platform version number and set the AVD name for Android.

Press the Run button to build and run your application. The build output will appear in the Rhodes build output console. The application log will be available in the Rhodes application output console, and will be written into your application folder.

For complete instructions on setting up the SDKs in RhoStudio for each platform, refer to the Build instructions for Rhodes.

Editing the SDK Locations in rhobuild.yml

Use RhoStudio preferences to edit the rhobuild.yml file, which manages the location(s) of the platform SDKs/JDKs that are used to build your Rhodes application. The rhobuild.yml file is located in the Rhodes gem folder (or, if you copied Rhodes from git, in the Rhodes source code folder).

Using Intellisense for Autocompletion

IntelliSense is Microsoft’s implementation of autocompletion, best known for its use in the Microsoft Visual Studio integrated development environment. In addition to completing the symbol names the programmer is typing, IntelliSense serves as documentation and disambiguation for variable names, functions and methods using reflection.

The RhoStudio DLTK implementation supports aspects of IntelliSense, such as Ruby keyword completion and block argument name completion. You can use IntelliSense in RhoStudio to autocomplete your Ruby code by setting the cursor at a location in your code and pressing Ctrl+Space.

For example, to complete a Ruby keyword beginning with “re”, perform these steps.

  1. Enter “re”.
  2. Press Ctrl+Space. The completion list includes “rescue” and “return”.

Checking Progress and Canceling an Operation

You can check the progress of an operation in RhoStudio, and cancel it if you wish. Open the Progress view to see the progress bar on the left, and click the red square on the right to cancel the operation.

The following RhoStudio operations can have their progress checked and be canceled from the Progress view.

  • Main menu: RhoMobile –> Production build
  • Main menu: Project –> Clean…
  • RhoMobile Application in Project Explorer: Run Configurations, RhoMobile Application
  • RhoMobile Application in Project Explorer: Run Configurations, RhoMobile Application Test
  • RhoConnect Application in Project Explorer: Run Configurations, RhoConnect Application

You can also check progress and perform a cancel for the following operations on existing applications.

  • Create a new project, and then select RhoMobile->RhoMobile application from the New Project window.
  • Create a new project, and then select RhoMobile->RhoConnect application from the New Project window.
  • Select a RhoMobile application in the Project Explorer, and create a New->RhoMobile model.
  • Select a RhoMobile application in the Project Explorer, and create a New->RhoMobile extension.
  • Select a RhoConnect application in the Project Explorer, and create a New->RhoConnect source adapter.

Creating a Production Build in RhoStudio

To create a production build for your application, perform these steps.

  1. Select your application in the Project Explorer.
  2. Select RhoMobile->Production Build from the main menu.
  3. From the Select platform pop-up box, select the platform to which you wish to build and click “Ok”.

Reference : http://docs.rhomobile.com/rhostudio.tutorial#managing-your-build-configuration

PhoneGap for Android Native App

PhoneGap is an open source framework for quickly building cross-platform mobile apps using HTML5, JavaScript and CSS.

Building applications for each device (iPhone, Android, Windows Mobile and more) requires different frameworks and languages. PhoneGap solves this by using standards-based web technologies to bridge web applications and mobile devices. Since PhoneGap apps are standards compliant, they’re future-proofed to work with browsers as they evolve.

Why PhoneGap?
Mobile apps are the next generation of development work. Several mobile operating systems currently compete in the world market: Android, iOS, Bada, BlackBerry, Symbian, webOS and Windows Phone.

Building an application for each platform requires adequate knowledge of, and skill with, that specific platform.
For example, to build an application for the Android platform you need a solid grounding in Java, while for iOS you need a strong understanding of Objective-C and the Cocoa framework, and so on.

PhoneGap requires no skills beyond those you most likely already have: HTML, JavaScript and CSS. It compiles your web application into a native application for the platform of your choice.

Practice
Here I will show how to develop a native Android app with Eclipse.
Download PhoneGap here: http://www.phonegap.com


After downloading Eclipse and installing the Android SDK:
1.) Prepare the file structure as shown in the following image.

2.) Write a few lines of code in the main activity, as shown below:
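The original code screenshot is not reproduced here; as a minimal sketch, a classic PhoneGap main activity extends DroidGap and loads the bundled web assets. The package name is a placeholder, and the DroidGap import path depends on your PhoneGap version (com.phonegap.DroidGap in 1.x, org.apache.cordova.DroidGap in 2.x):

package com.example.phonegapdemo; // placeholder package name

import android.os.Bundle;
import org.apache.cordova.DroidGap; // com.phonegap.DroidGap in PhoneGap 1.x

public class MainActivity extends DroidGap {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Load the app's start page from the embedded assets/www folder
        super.loadUrl("file:///android_asset/www/index.html");
    }
}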


Here is the result:

Finally, it is done. Simple, right?
You can try it yourself for iOS, Windows Phone and the other platforms.