What is Swagger for Web API?

Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability.

We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs.

Now in version 2.0, Swagger is more enabling than ever. And it’s 100% open source software.

Demo Site:

http://petstore.swagger.io/

 

Reference: http://swagger.io/

Configure GlassFish 4.1 with JAVA 8 in Ubuntu 15.04

GlassFish is an open source application server for the development and deployment of Java Platform, Enterprise Edition (Java EE) applications and web technologies based on Java. It supports different Java-based technologies like Enterprise JavaBeans, JPA, JavaServer Faces, JMS, RMI, JavaServer Pages, servlets and more. GlassFish provides a lightweight and extensible core based on OSGi Alliance standards with a web container. For configuration and management, it has an easy-to-use administration console with an update tool for updates and add-on components. GlassFish also has good support for high-availability clustering and load balancing.

Now, we’ll install GlassFish on Ubuntu 15.04 in a few easy steps.

1. Adding Java PPA

First of all, we’ll need to install Oracle JDK 8. As Oracle Java is not available in the Ubuntu repositories, we’ll need to add a PPA to access the Oracle Java 8 installer. So, we’ll first install python-software-properties (if not already installed) and then add the PPA to our Ubuntu 15.04 machine.

# apt-get install python-software-properties

Reading package lists… Done
Building dependency tree
Reading state information… Done
The following extra packages will be installed:
libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python python-apt python-minimal
python-pycurl python2.7 python2.7-minimal
Suggested packages:
python-doc python-tk python-apt-dbg python-gtk2 python-vte python-apt-doc
libcurl4-gnutls-dev python-pycurl-dbg python-pycurl-doc python2.7-doc binutils
binfmt-support
The following NEW packages will be installed:
libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python python-apt python-minimal
python-pycurl python-software-properties python2.7 python2.7-minimal
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,126 kB of archives.
After this operation, 17.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Now, we’ll add the ppa for Java using add-apt-repository command as shown below.

# add-apt-repository ppa:webupd8team/java

Oracle Java (JDK) Installer (automatically downloads and installs Oracle JDK7 / JDK8 / JDK9). There are no actual Java files in this PPA.

More info (and Ubuntu installation instructions):
– for Oracle Java 7: http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html
– for Oracle Java 8: http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html

Debian installation instructions:
– Oracle Java 7: http://www.webupd8.org/2012/06/how-to-install-oracle-java-7-in-debian.html
– Oracle Java 8: http://www.webupd8.org/2014/03/how-to-install-oracle-java-8-in-debian.html

Important!!! For now, you should continue to use Java 8 because Oracle Java 9 is available as an early access release (it should be released in 2016)! You should only use Oracle Java 9 if you explicitly need it, because it may contain bugs and it might not include the latest security patches! Also, some Java options were removed in JDK9, so you may encounter issues with various Java apps. More information and installation instructions (Ubuntu / Linux Mint / Debian): http://www.webupd8.org/2015/02/install-oracle-java-9-in-ubuntu-linux.html
More info: https://launchpad.net/~webupd8team/+archive/ubuntu/java
Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmpahw0r1nh/secring.gpg’ created
gpg: keyring `/tmp/tmpahw0r1nh/pubring.gpg’ created
gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpahw0r1nh/trustdb.gpg: trustdb created
gpg: key EEA14886: public key “Launchpad VLC” imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

After adding the PPA repository, we’ll want to update the local package repository index. To do so, we’ll need to run the following command.

# apt-get update

2. Installing Oracle JDK 8

After updating the repository index, we’ll want to install Oracle JDK 8 by running the following command.

# apt-get install oracle-java8-installer

Reading package lists… Done
Building dependency tree
Reading state information… Done
The following extra packages will be installed:
binutils gsfonts gsfonts-x11 java-common libfontenc1 libxfont1 x11-common xfonts-encodings
xfonts-utils
Suggested packages:
binutils-doc default-jre equivs binfmt-support visualvm ttf-baekmuk ttf-unfonts
ttf-unfonts-core ttf-kochi-gothic ttf-sazanami-gothic ttf-kochi-mincho ttf-sazanami-mincho
ttf-arphic-uming firefox firefox-2 iceweasel mozilla-firefox iceape-browser
mozilla-browser epiphany-gecko epiphany-webkit epiphany-browser galeon midbrowser
moblin-web-browser xulrunner xulrunner-1.9 konqueror chromium-browser midori google-chrome
The following NEW packages will be installed:
binutils gsfonts gsfonts-x11 java-common libfontenc1 libxfont1 oracle-java8-installer
x11-common xfonts-encodings xfonts-utils
0 upgraded, 10 newly installed, 0 to remove and 22 not upgraded.
Need to get 6,579 kB of archives.
After this operation, 20.2 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

3. Setting "JAVA_HOME" Variable

Now, after installing Oracle JDK 8, we’ll set the environment variable "JAVA_HOME" to the path of the newly installed Oracle JDK 8. To set the variable, we’ll need to edit the /etc/environment file using our favorite text editor.

# nano /etc/environment

After opening with the text editor, we’ll need to add the following line into the bottom of the file.

JAVA_HOME="/usr/lib/jvm/java-8-oracle"

Once the line is added, we’ll need to reload the file.

# source /etc/environment

After installing and setting the Oracle JDK 8, we’ll run the following command to check and confirm.

# java -version

java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

If we see the output as shown above, it is confirmed that we have Java 8 installed in our machine.

4. Installing GlassFish 4.1

After Java is installed correctly, we’ll now move on to installing GlassFish 4.1, which is the latest version to date. We can also download older versions from the official GlassFish download page https://glassfish.java.net/download.html .

# cd /tmp
# wget 'http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip'

–2015-05-26 05:53:22– http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip
Resolving download.java.net (download.java.net)… 137.254.120.26
Connecting to download.java.net (download.java.net)|137.254.120.26|:80… connected.
HTTP request sent, awaiting response… 302 Moved Temporarily
Location: http://dlc-cdn.sun.com/glassfish/4.1/release/glassfish-4.1.zip [following]
–2015-05-26 05:53:22– http://dlc-cdn.sun.com/glassfish/4.1/release/glassfish-4.1.zip
Resolving dlc-cdn.sun.com (dlc-cdn.sun.com)… 23.0.160.207, 23.0.160.198
Connecting to dlc-cdn.sun.com (dlc-cdn.sun.com)|23.0.160.207|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 107743725 (103M) [application/zip]
Saving to: ‘glassfish-4.1.zip’

glassfish-4.1.zip 100%[===============================>] 102.75M 78.7MB/s in 1.3s

2015-05-26 05:53:23 (78.7 MB/s) – ‘glassfish-4.1.zip’ saved [107743725/107743725]

Now, we’ll want to extract the downloaded zip package of the latest GlassFish 4.1. To do that, we’ll need to install unzip and then extract the package into the /opt directory.

# apt-get install unzip
# unzip glassfish-4.1.zip -d /opt

5. Setting GlassFish PATH

Now, we’ll want to set up the PATH variable for GlassFish so that the GlassFish executables are accessible directly from any directory. To do that, we’ll edit the ~/.profile file and add the directory where GlassFish was extracted to the PATH.

# nano ~/.profile

Then add the following line to it.

export PATH=/opt/glassfish4/bin:$PATH

# source ~/.profile

6. Starting GlassFish server

Finally, after installing Oracle Java 8 and GlassFish 4.1 on our Ubuntu 15.04 machine, we’ll want to start the GlassFish server. To do so, we’ll run asadmin as follows.

# asadmin start-domain

Waiting for domain1 to start …………
Successfully started the domain : domain1
domain Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.

A domain is a set of one or more GlassFish Server instances managed by one administration server. The default GlassFish Server port is 8080 and the administration server port is 4848, with the administration user name admin and no password. We can visit http://ip-address:8080/ to check the home page of the GlassFish Server and http://ip-address:4848/ to get the admin login page in our web browser.

GlassFish Home Page

GlassFish Login

7. Enabling Secure Admin

Now, in order to access the administration panel remotely via a web page, we’ll need to enable secure admin using asadmin by running the following command.

# asadmin enable-secure-admin

Enter admin user name> admin
Enter admin password for user “admin”>
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.

This will ask us for the username and password we want to set.

Note: If you get the error “remote failure: At least one admin user has an empty password, which secure admin does not permit. Use the change-admin-password command or the admin console to create non-empty passwords for admin accounts.”, you’ll need to run asadmin change-admin-password, enter a new password for the admin user, and then retry the command above.

# asadmin change-admin-password

Enter admin user name [default: admin]>admin
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Authentication failed for user: admin (Usually, this means invalid user name and/or password)
Command change-admin-password failed.

After setting, we’ll need to restart the domain.

# asadmin restart-domain

Successfully restarted the domain
Command restart-domain executed successfully.

After enabling the secure admin, we are able to access the administration panel by pointing our web browser to http://ip-address:4848 . Then, access the admin panel by entering the credentials entered above.

GlassFish Administration Panel

8. Deploying WAR on GlassFish

Now, after we have successfully installed GlassFish and started the server, we’ll want to deploy a WAR application to GlassFish. In this tutorial, we’ll deploy hello.war to test the server. So, first we’ll download hello.war from the official GlassFish samples page using the wget command.

# wget https://glassfish.java.net/downloads/quickstart/hello.war

–2015-05-26 06:46:19– https://glassfish.java.net/downloads/quickstart/hello.war
Resolving glassfish.java.net (glassfish.java.net)… 137.254.56.48
Connecting to glassfish.java.net (glassfish.java.net)|137.254.56.48|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 4102 (4.0K) [text/plain]
Saving to: ‘hello.war’

hello.war 100%[===============================>] 4.01K –.-KB/s in 0s

2015-05-26 06:46:19 (36.7 MB/s) – ‘hello.war’ saved [4102/4102]

After downloading the war file, we’ll now deploy the war file using asadmin command.

# asadmin deploy hello.war

Enter admin user name> admin
Enter admin password for user “admin”>
Application deployed with name hello.
Command deploy executed successfully.

This will ask us to enter the username and password for the application deployment.

As the war application has been deployed, we can check it by visiting http://ip-address:8080/hello using our web browser.

GlassFish Hello Deploy

9. Undeploying and Stopping Server

Now, once we are done with the GlassFish server and the deployed application, we can simply undeploy the application and stop the GlassFish server.

To undeploy a running application, we can simply run asadmin undeploy with the application name we want to undeploy.

# asadmin undeploy hello

Enter admin user name> admin
Enter admin password for user “admin”>
Command undeploy executed successfully.

To stop the running GlassFish domain, we can simply run asadmin stop-domain.

# asadmin stop-domain

Waiting for the domain to stop .
Command stop-domain executed successfully.

Creating a password file

If you are tired of entering the username and password every time you deploy or undeploy an application, you can simply create a file named pwdfile with a text editor and add the following line to it.

# nano pwdfile

AS_ADMIN_PASSWORD=your_admin_password

Now, after that file is created, we can just add the --passwordfile flag pointing to the pwdfile and then deploy the WAR application as shown below.

# asadmin --passwordfile pwdfile deploy hello.war

Application deployed with name hello.
Command deploy executed successfully.

Now, the prompt for the username and password won’t appear anymore.

Conclusion

GlassFish is an awesome open source application server that implements Java EE. We can install GlassFish in different ways, such as the ZIP package, the self-extracting bundle, and the Full Platform or Web Profile distribution. Here, in this tutorial, we’ve used the Full Platform ZIP package. The latest GlassFish version 4.1 includes new support for Java API for JSON Processing (JSON-P) 1.0, Java API for WebSocket 1.1, Batch Applications for the Java Platform 1.0, Concurrency Utilities for Java EE 1.0, Java Message Service (JMS) 2.0, Java API for RESTful Web Services (JAX-RS) 2.0 and many updated Java EE standards. GlassFish makes the deployment of Java WAR applications fast, secure and easy. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy 🙂

Reference: http://linoxide.com/ubuntu-how-to/setup-glassfish-4-1-java-8-ubuntu-15-04/

SMS and email Two-Factor Authentication in ASP.NET MVC 5

Create an ASP.NET  MVC app

Start by installing and running Visual Studio Express 2013 for Web or Visual Studio 2013. Install Visual Studio 2013 Update 3 or higher.

Warning: You should complete Create a secure ASP.NET MVC 5 web app with log in, email confirmation and password reset before proceeding. You must install Visual Studio 2013 Update 3 or higher to complete this tutorial.
  1. Create a new ASP.NET Web project and select the MVC template. Web Forms also supports ASP.NET Identity, so you could follow similar steps in a web forms app.
  2. Leave the default authentication as Individual User Accounts. If you’d like to host the app in Azure, leave the check box checked. Later in the tutorial we will deploy to Azure. You can open an Azure account for free.
  3. Set the project to use SSL.

Set up SMS for Two-factor authentication

This tutorial provides instructions for using either Twilio or ASPSMS but you can use any other SMS provider.

  1. Creating a User Account with an SMS provider

    Create a Twilio or an ASPSMS account.

  2. Installing additional packages or adding service references

    Twilio:
    In the Package Manager Console, enter the following command:
    Install-Package Twilio

    ASPSMS:
    The following service reference needs to be added:

    Address:
    https://webservice.aspsms.com/aspsmsx2.asmx?WSDL

    Namespace:
    ASPSMSX2

  3. Figuring out SMS Provider User credentials

    Twilio:
    From the Dashboard tab of your Twilio account, copy the Account SID and Auth token.

    ASPSMS:
    From your account settings, navigate to Userkey and copy it together with your self-defined Password.

    We will later store these values in the web.config file within the keys "SMSAccountIdentification" and "SMSAccountPassword".

  4. Specifying SenderID / Originator

    Twilio:
    From the Numbers tab, copy your Twilio phone number.

    ASPSMS:
    Within the Unlock Originators Menu, unlock one or more Originators or choose an alphanumeric Originator (Not supported by all networks).

    We will later store this value in the web.config file within the key "SMSAccountFrom".

  5. Transferring SMS provider credentials into app

    Make the credentials and sender phone number available to the app. To keep things simple we will store these values in the web.config file. When we deploy to Azure, we can store the values securely in the app settings section on the web site's Configure tab.

    </connectionStrings>
       <appSettings>
          <add key="webpages:Version" value="3.0.0.0" />
          <!-- Markup removed for clarity. -->
          <!-- SendGrid-->
          <add key="mailAccount" value="account" />
          <add key="mailPassword" value="password" />
          <add key="SMSAccountIdentification" value="My Identification" />
          <add key="SMSAccountPassword" value="My Password" />
          <add key="SMSAccountFrom" value="+12065551234" />
       </appSettings>
      <system.web>
    Security Note: Never store sensitive data in your source code. The account and credentials are added to the code above to keep the sample simple. See Best practices for deploying passwords and other sensitive data to ASP.NET and Azure .
  6. Implementation of data transfer to SMS provider

    Configure the SmsService class in the App_Start\IdentityConfig.cs file.

    Depending on the SMS provider used, activate either the Twilio or the ASPSMS section:

    public class SmsService : IIdentityMessageService
    {
        public Task SendAsync(IdentityMessage message)
        {
            // Twilio Begin
            // var Twilio = new TwilioRestClient(
            //   System.Configuration.ConfigurationManager.AppSettings["SMSAccountIdentification"],
            //   System.Configuration.ConfigurationManager.AppSettings["SMSAccountPassword"]);
            // var result = Twilio.SendMessage(
            //   System.Configuration.ConfigurationManager.AppSettings["SMSAccountFrom"],
            //   message.Destination, message.Body
            // );
            // Status is one of Queued, Sending, Sent, Failed or null if the number is not valid
            // Trace.TraceInformation(result.Status);
            // Twilio doesn't currently have an async API, so return success.
            // return Task.FromResult(0);
            // Twilio End
    
            // ASPSMS Begin 
            // var soapSms = new MvcPWx.ASPSMSX2.ASPSMSX2SoapClient("ASPSMSX2Soap");
            // soapSms.SendSimpleTextSMS(
            //   System.Configuration.ConfigurationManager.AppSettings["SMSAccountIdentification"],
            //   System.Configuration.ConfigurationManager.AppSettings["SMSAccountPassword"],
            //   message.Destination,
            //   System.Configuration.ConfigurationManager.AppSettings["SMSAccountFrom"],
            //   message.Body);
            // soapSms.Close();
            // return Task.FromResult(0);
            // ASPSMS End

            // Until one of the provider blocks above is activated, return a
            // completed task (as the project template does) so the class compiles.
            return Task.FromResult(0);
        }
    }
  7. Update the Views\Manage\Index.cshtml Razor view (note: don’t just remove the comments in the existing code; use the code below):
    @model MvcPWy.Models.IndexViewModel
    @{
       ViewBag.Title = "Manage";
    }
    <h2>@ViewBag.Title.</h2>
    <p class="text-success">@ViewBag.StatusMessage</p>
    

    <div>
        <h4>Change your account settings</h4>
        <hr />
        <dl class="dl-horizontal">
            <dt>Password:</dt>
            <dd>[ @if (Model.HasPassword) { @Html.ActionLink("Change your password", "ChangePassword") } else { @Html.ActionLink("Create", "SetPassword") } ]</dd>
            <dt>External Logins:</dt>
            <dd>@Model.Logins.Count [ @Html.ActionLink("Manage", "ManageLogins") ]</dd>
            <dt>Phone Number:</dt>
            <dd>@(Model.PhoneNumber ?? "None") [ @if (Model.PhoneNumber != null) { @Html.ActionLink("Change", "AddPhoneNumber") <text> | </text> @Html.ActionLink("Remove", "RemovePhoneNumber") } else { @Html.ActionLink("Add", "AddPhoneNumber") } ]</dd>
            <dt>Two-Factor Authentication:</dt>
            <dd>
                @if (Model.TwoFactor) { using (Html.BeginForm("DisableTwoFactorAuthentication", "Manage", FormMethod.Post, new { @class = "form-horizontal", role = "form" })) { @Html.AntiForgeryToken() <text>Enabled</text> <input type="submit" value="Disable" class="btn btn-link" /> } }
                else { using (Html.BeginForm("EnableTwoFactorAuthentication", "Manage", FormMethod.Post, new { @class = "form-horizontal", role = "form" })) { @Html.AntiForgeryToken() <text>Disabled</text> <input type="submit" value="Enable" class="btn btn-link" /> } }
            </dd>
        </dl>
    </div>
  8. Verify the EnableTwoFactorAuthentication and DisableTwoFactorAuthentication action methods in the ManageController have the [ValidateAntiForgeryToken] attribute:
    //
    // POST: /Manage/EnableTwoFactorAuthentication
    [HttpPost,ValidateAntiForgeryToken]
    public async Task<ActionResult> EnableTwoFactorAuthentication()
    {
        await UserManager.SetTwoFactorEnabledAsync(User.Identity.GetUserId(), true);
        var user = await UserManager.FindByIdAsync(User.Identity.GetUserId());
        if (user != null)
        {
            await SignInAsync(user, isPersistent: false);
        }
        return RedirectToAction("Index", "Manage");
    }
    //
    // POST: /Manage/DisableTwoFactorAuthentication
    [HttpPost, ValidateAntiForgeryToken]
    public async Task<ActionResult> DisableTwoFactorAuthentication()
    {
        await UserManager.SetTwoFactorEnabledAsync(User.Identity.GetUserId(), false);
        var user = await UserManager.FindByIdAsync(User.Identity.GetUserId());
        if (user != null)
        {
            await SignInAsync(user, isPersistent: false);
        }
        return RedirectToAction("Index", "Manage");
    }
  9. Run the app and log in with the account you previously registered.
  10. Click on your User ID, which activates the Index action method in Manage controller.
  11. Click Add.
  12. The AddPhoneNumber action method displays a dialog box to enter a phone number that can receive SMS messages.
    // GET: /Account/AddPhoneNumber
    public ActionResult AddPhoneNumber()
    {
       return View();
    }

  13. In a few seconds you will get a text message with the verification code. Enter it and press Submit.
  14. The Manage view shows your phone number was added.

Enable two-factor authentication

In the template generated app, you need to use the UI to enable two-factor authentication (2FA). To enable 2FA, click on your user ID (email alias) in the navigation bar.

Click on enable 2FA.

Log out, then log back in. If you’ve enabled email (see my previous tutorial), you can select the SMS or email for 2FA.

The Verify Code page is displayed where you can enter the code (from SMS or email).

Clicking on the Remember this browser check box will exempt you from needing to use 2FA to log in when using the browser and device where you checked the box. As long as malicious users can’t gain access to your device, enabling 2FA and clicking on the Remember this browser will provide you with convenient one step password access, while still retaining strong 2FA protection for all access from non-trusted devices. You can do this on any private device you regularly use.

 

Reference: http://www.asp.net/mvc/overview/security/aspnet-mvc-5-app-with-sms-and-email-two-factor-authentication

How to Choose the Right Passive UHF RFID Antenna?

We will discuss the key factors below to help you better understand passive UHF RFID antennas.

FREQUENCY RANGE

Each country has regulations that specify the frequency ranges for UHF RFID transmissions within that country. The three most prevalent frequency ranges for UHF RFID antennas are:

  • 902-928 MHz (US/FCC)
  • 865-868 MHz (EU/ETSI)
  • 860-960 MHz (Global)

When choosing an RFID antenna, be sure to select the frequency range that is right for your region.

GAIN/BEAMWIDTH

Gain and beamwidth are grouped together because they are both electrical properties of an antenna and are closely related. The higher the gain, the narrower (or smaller) the beamwidth. Higher gain creates a narrower area of coverage, but the beam will travel a longer distance. Beamwidth and gain are analogous to the beam of a flashlight. Check out the diagram below to see how differences in gain can drastically affect the antenna’s beamwidth.


Beamwidth is determined by gain – the higher the gain, the more focused the beam.

The ideal beamwidth and gain will depend on your specific application. If you have many tags a short distance away, then you most likely don’t need a high gain antenna; it would be more advantageous to use a wide beamwidth antenna with relatively low gain as represented by the third image above.

POLARIZATION

Most UHF RFID passive antennas are either linearly or circularly polarized. Linearly polarized antennas send RF waves in a single plane either horizontally or vertically. Circularly polarized antennas send RF waves in a circular motion either clockwise or counterclockwise. When the waves rotate clockwise, the antenna is a left-hand circularly polarized (LHCP) antenna; when the waves rotate counterclockwise, the antenna is a right-hand circularly-polarized antenna (RHCP).

When you have a setup where antennas are facing each other, it’s important to know whether you have LHCP or RHCP antennas. When antennas face each other and emit waves in the same direction, the waves will create null zones where the two sides meet. If you choose one LHCP and one RHCP antenna when you have two antennas facing each other, it creates a more effective read zone than if you use two LHCP antennas.


One exception to the rule above is when using a bistatic system. If you use a bistatic system in a portal arrangement (antennas facing each other), the antenna that transmits the RF wave will need to be the SAME polarization as the antenna that receives the RF wave. So if a LHCP transmits the wave, the antenna that receives the RF wave will need to be LHCP in order to receive it most efficiently.

If all the tags in your application will be read in the same orientation and at the same height, then it may be best to use a linearly polarized antenna. The main advantage of circularly polarized antennas is that they are better for applications where you cannot predict tag placement or orientation.

 

Reference: http://blog.atlasrfidstore.com/choose-right-rfid-antenna

 

Create and Populate Date Dimension for Data Warehouse

A date dimension plays an important role in data warehouse design; it provides the ability to study the behavior and trends of your data over a period of time.

You can study your data by grouping it using the various fields of the date dimension.

For example:

I may want to analyze my total sales by each month of the year, show total sales by each quarter of the year, or see on which days of the year or month most sales take place.

After implementing the complete solution in the data warehouse, the relationships to the date dimension give you the ability to slice and dice your data in all of these ways.
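
For instance, once a fact table is related to the date dimension through its date key, a query like the one below returns total sales by year and quarter. This is only a minimal sketch: FactSales, DateKey and SalesAmount are assumed (hypothetical) names for your own fact table and its columns, while Year, Quarter and QuarterName come from the DimDate table built in this tip.

SELECT
	d.[Year],
	d.[Quarter],
	d.QuarterName,
	SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales f
	INNER JOIN dbo.DimDate d ON f.DateKey = d.DateKey
GROUP BY
	d.[Year],
	d.[Quarter],
	d.QuarterName
ORDER BY
	d.[Year],
	d.[Quarter]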

So, as an initial step, you need to design your date and time dimensions and populate them with a range of values.

For designing of time dimension, you can refer to my other tip posted on CodeProject, “Design and Populate Time Dimension with 24 Hour plus Values”.

This date dimension stores the date in the various formats used across the world; for example, the "dd-MM-yyyy" format is used in Europe, the UK, India, etc., while the "MM-dd-yyyy" format is used in the US.
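
To see these two formats side by side, you can use the same CONVERT styles that the script below uses (style 103 for the UK format, style 101 for the US format); this is just a quick illustration:

SELECT
	CONVERT(char(10), GETDATE(), 103) AS FullDateUK,  -- dd/MM/yyyy
	CONVERT(char(10), GETDATE(), 101) AS FullDateUSA  -- MM/dd/yyyy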

 

/********************************************************************************************/
--Specify Start Date and End date here
--Value of Start Date Must be Less than Your End Date 

DECLARE @StartDate DATETIME = '01/01/2013' --Starting value of Date Range
DECLARE @EndDate DATETIME = '01/01/2015' --End Value of Date Range

--Temporary Variables To Hold the Values During Processing of Each Date of Year
DECLARE
	@DayOfWeekInMonth INT,
	@DayOfWeekInYear INT,
	@DayOfQuarter INT,
	@WeekOfMonth INT,
	@CurrentYear INT,
	@CurrentMonth INT,
	@CurrentQuarter INT

/*Table Data type to store the day of week count for the month and year*/
DECLARE @DayOfWeek TABLE (DOW INT, MonthCount INT, QuarterCount INT, YearCount INT)

INSERT INTO @DayOfWeek VALUES (1, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (2, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (3, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (4, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (5, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (6, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (7, 0, 0, 0)

--Extract and assign various parts of Values from Current Date to Variable

DECLARE @CurrentDate AS DATETIME = @StartDate
SET @CurrentMonth = DATEPART(MM, @CurrentDate)
SET @CurrentYear = DATEPART(YY, @CurrentDate)
SET @CurrentQuarter = DATEPART(QQ, @CurrentDate)

/********************************************************************************************/
--Proceed only if Start Date(Current date ) is less than End date you specified above

WHILE @CurrentDate < @EndDate
BEGIN
 
/*Begin day of week logic*/

         /*Check for Change in Month of the Current date if Month changed then 
          Change variable value*/
	IF @CurrentMonth != DATEPART(MM, @CurrentDate) 
	BEGIN
		UPDATE @DayOfWeek
		SET MonthCount = 0
		SET @CurrentMonth = DATEPART(MM, @CurrentDate)
	END

        /* Check for Change in Quarter of the Current date if Quarter changed then change 
         Variable value*/

	IF @CurrentQuarter != DATEPART(QQ, @CurrentDate)
	BEGIN
		UPDATE @DayOfWeek
		SET QuarterCount = 0
		SET @CurrentQuarter = DATEPART(QQ, @CurrentDate)
	END
       
        /* Check for Change in Year of the Current date if Year changed then change 
         Variable value*/
	

	IF @CurrentYear != DATEPART(YY, @CurrentDate)
	BEGIN
		UPDATE @DayOfWeek
		SET YearCount = 0
		SET @CurrentYear = DATEPART(YY, @CurrentDate)
	END
	
        -- Set values in table data type created above from variables 

	UPDATE @DayOfWeek
	SET 
		MonthCount = MonthCount + 1,
		QuarterCount = QuarterCount + 1,
		YearCount = YearCount + 1
	WHERE DOW = DATEPART(DW, @CurrentDate)

	SELECT
		@DayOfWeekInMonth = MonthCount,
		@DayOfQuarter = QuarterCount,
		@DayOfWeekInYear = YearCount
	FROM @DayOfWeek
	WHERE DOW = DATEPART(DW, @CurrentDate)
	
/*End day of week logic*/


/* Populate Your Dimension Table with values*/
	
	INSERT INTO [dbo].[DimDate]
	SELECT
		
		CONVERT (char(8),@CurrentDate,112) as DateKey,
		@CurrentDate AS Date,
		CONVERT (char(10),@CurrentDate,103) as FullDateUK,
		CONVERT (char(10),@CurrentDate,101) as FullDateUSA,
		DATEPART(DD, @CurrentDate) AS DayOfMonth,
		--Apply Suffix values like 1st, 2nd 3rd etc..
		CASE
			WHEN DATEPART(DD,@CurrentDate) IN (11,12,13)
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'th'
			WHEN RIGHT(DATEPART(DD,@CurrentDate),1) = 1
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'st'
			WHEN RIGHT(DATEPART(DD,@CurrentDate),1) = 2
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'nd'
			WHEN RIGHT(DATEPART(DD,@CurrentDate),1) = 3
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'rd'
			ELSE CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'th'
			END AS DaySuffix,
		
		DATENAME(DW, @CurrentDate) AS DayName,
		DATEPART(DW, @CurrentDate) AS DayOfWeekUSA,

		-- check for day of week as Per US and change it as per UK format 
		CASE DATEPART(DW, @CurrentDate)
			WHEN 1 THEN 7
			WHEN 2 THEN 1
			WHEN 3 THEN 2
			WHEN 4 THEN 3
			WHEN 5 THEN 4
			WHEN 6 THEN 5
			WHEN 7 THEN 6
			END 
			AS DayOfWeekUK,
		
		@DayOfWeekInMonth AS DayOfWeekInMonth,
		@DayOfWeekInYear AS DayOfWeekInYear,
		@DayOfQuarter AS DayOfQuarter,
		DATEPART(DY, @CurrentDate) AS DayOfYear,
		DATEPART(WW, @CurrentDate) + 1 - DATEPART(WW, CONVERT(VARCHAR,
		DATEPART(MM, @CurrentDate)) + '/1/' + CONVERT(VARCHAR,
		DATEPART(YY, @CurrentDate))) AS WeekOfMonth,
		(DATEDIFF(DD, DATEADD(QQ, DATEDIFF(QQ, 0, @CurrentDate), 0),
		@CurrentDate) / 7) + 1 AS WeekOfQuarter,
		DATEPART(WW, @CurrentDate) AS WeekOfYear,
		DATEPART(MM, @CurrentDate) AS Month,
		DATENAME(MM, @CurrentDate) AS MonthName,
		CASE
			WHEN DATEPART(MM, @CurrentDate) IN (1, 4, 7, 10) THEN 1
			WHEN DATEPART(MM, @CurrentDate) IN (2, 5, 8, 11) THEN 2
			WHEN DATEPART(MM, @CurrentDate) IN (3, 6, 9, 12) THEN 3
			END AS MonthOfQuarter,
		DATEPART(QQ, @CurrentDate) AS Quarter,
		CASE DATEPART(QQ, @CurrentDate)
			WHEN 1 THEN 'First'
			WHEN 2 THEN 'Second'
			WHEN 3 THEN 'Third'
			WHEN 4 THEN 'Fourth'
			END AS QuarterName,
		DATEPART(YEAR, @CurrentDate) AS Year,
		'CY ' + CONVERT(VARCHAR, DATEPART(YEAR, @CurrentDate)) AS YearName,
		LEFT(DATENAME(MM, @CurrentDate), 3) + '-' + CONVERT(VARCHAR,
		DATEPART(YY, @CurrentDate)) AS MonthYear,
		RIGHT('0' + CONVERT(VARCHAR, DATEPART(MM, @CurrentDate)),2) +
		CONVERT(VARCHAR, DATEPART(YY, @CurrentDate)) AS MMYYYY,
		CONVERT(DATETIME, CONVERT(DATE, DATEADD(DD, - (DATEPART(DD,
		@CurrentDate) - 1), @CurrentDate))) AS FirstDayOfMonth,
		CONVERT(DATETIME, CONVERT(DATE, DATEADD(DD, - (DATEPART(DD,
		(DATEADD(MM, 1, @CurrentDate)))), DATEADD(MM, 1,
		@CurrentDate)))) AS LastDayOfMonth,
		DATEADD(QQ, DATEDIFF(QQ, 0, @CurrentDate), 0) AS FirstDayOfQuarter,
		DATEADD(QQ, DATEDIFF(QQ, -1, @CurrentDate), -1) AS LastDayOfQuarter,
		CONVERT(DATETIME, '01/01/' + CONVERT(VARCHAR, DATEPART(YY,
		@CurrentDate))) AS FirstDayOfYear,
		CONVERT(DATETIME, '12/31/' + CONVERT(VARCHAR, DATEPART(YY,
		@CurrentDate))) AS LastDayOfYear,
		NULL AS IsHolidayUSA,
		CASE DATEPART(DW, @CurrentDate)
			WHEN 1 THEN 0
			WHEN 2 THEN 1
			WHEN 3 THEN 1
			WHEN 4 THEN 1
			WHEN 5 THEN 1
			WHEN 6 THEN 1
			WHEN 7 THEN 0
			END AS IsWeekday,
		NULL AS HolidayUSA, Null, Null

	SET @CurrentDate = DATEADD(DD, 1, @CurrentDate)
END

/********************************************************************************************/
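
As a quick sanity check once the loop above finishes, you can confirm the populated row count and date range with a simple verification query against the DimDate table:

SELECT
	COUNT(*)    AS TotalRows,
	MIN([Date]) AS FirstDate,
	MAX([Date]) AS LastDate
FROM [dbo].[DimDate]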
 
Step 3.
Update the holiday values as per the UK government's declaration of national holidays.

/*Update HOLIDAY fields of UK as per Govt. Declaration of National Holiday*/
	
-- Good Friday  April 18 
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Good Friday'
	WHERE [Month] = 4 AND [DayOfMonth]  = 18

-- Easter Monday  April 21 
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Easter Monday'
	WHERE [Month] = 4 AND [DayOfMonth]  = 21

-- Early May Bank Holiday   May 5 
   UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Early May Bank Holiday'
	WHERE [Month] = 5 AND [DayOfMonth]  = 5

-- Spring Bank Holiday  May 26 
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Spring Bank Holiday'
	WHERE [Month] = 5 AND [DayOfMonth]  = 26

-- Summer Bank Holiday  August 25 
    UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Summer Bank Holiday'
	WHERE [Month] = 8 AND [DayOfMonth]  = 25

-- Boxing Day  December 26  	
    UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Boxing Day'
	WHERE [Month] = 12 AND [DayOfMonth]  = 26	

--CHRISTMAS
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Christmas Day'
	WHERE [Month] = 12 AND [DayOfMonth]  = 25

--New Years Day
	UPDATE [dbo].[DimDate]
		SET HolidayUK  = 'New Year''s Day'
	WHERE [Month] = 1 AND [DayOfMonth] = 1

--Update flag for UK Holidays 1= Holiday, 0=No Holiday
	
	UPDATE [dbo].[DimDate]
		SET IsHolidayUK = CASE WHEN HolidayUK IS NULL
		THEN 0 WHEN HolidayUK IS NOT NULL THEN 1 END
		
 
Step 4.
Update the holiday values as per the US government's declaration of national holidays.

/*Update HOLIDAY Field of USA In dimension*/
	
 	/*THANKSGIVING - Fourth THURSDAY in November*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Thanksgiving Day'
	WHERE
		[Month] = 11 
		AND [DayOfWeekUSA] = 'Thursday' 
		AND DayOfWeekInMonth = 4

	/*CHRISTMAS*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Christmas Day'
		
	WHERE [Month] = 12 AND [DayOfMonth]  = 25

	/*4th of July*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Independence Day'
	WHERE [Month] = 7 AND [DayOfMonth] = 4

	/*New Years Day*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'New Year''s Day'
	WHERE [Month] = 1 AND [DayOfMonth] = 1

	/*Memorial Day - Last Monday in May*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Memorial Day'
	FROM [dbo].[DimDate]
	WHERE DateKey IN 
		(
		SELECT
			MAX(DateKey)
		FROM [dbo].[DimDate]
		WHERE
			[MonthName] = 'May'
			AND [DayOfWeekUSA]  = 'Monday'
		GROUP BY
			[Year],
			[Month]
		)

	/*Labor Day - First Monday in September*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Labor Day'
	FROM [dbo].[DimDate]
	WHERE DateKey IN 
		(
		SELECT
			MIN(DateKey)
		FROM [dbo].[DimDate]
		WHERE
			[MonthName] = 'September'
			AND [DayOfWeekUSA] = 'Monday'
		GROUP BY
			[Year],
			[Month]
		)

	/*Valentine's Day*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Valentine''s Day'
	WHERE
		[Month] = 2 
		AND [DayOfMonth] = 14

	/*Saint Patrick's Day*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Saint Patrick''s Day'
	WHERE
		[Month] = 3
		AND [DayOfMonth] = 17

	/*Martin Luther King Day - Third Monday in January starting in 1983*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Martin Luther King Jr Day'
	WHERE
		[Month] = 1
		AND [DayOfWeekUSA]  = 'Monday'
		AND [Year] >= 1983
		AND DayOfWeekInMonth = 3

	/*President's Day - Third Monday in February*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'President''s Day'
	WHERE
		[Month] = 2
		AND [DayOfWeekUSA] = 'Monday'
		AND DayOfWeekInMonth = 3

	/*Mother's Day - Second Sunday of May*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Mother''s Day'
	WHERE
		[Month] = 5
		AND [DayOfWeekUSA] = 'Sunday'
		AND DayOfWeekInMonth = 2

	/*Father's Day - Third Sunday of June*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Father''s Day'
	WHERE
		[Month] = 6
		AND [DayOfWeekUSA] = 'Sunday'
		AND DayOfWeekInMonth = 3

	/*Halloween 10/31*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Halloween'
	WHERE
		[Month] = 10
		AND [DayOfMonth] = 31

	/*Election Day - The first Tuesday after the first Monday in November*/
	BEGIN
	DECLARE @Holidays TABLE (ID INT IDENTITY(1,1),
	DateID int, Week TINYINT, YEAR CHAR(4), DAY CHAR(2))

		INSERT INTO @Holidays(DateID, [Year],[Day])
		SELECT
			DateKey,
			[Year],
			[DayOfMonth] 
		FROM [dbo].[DimDate]
		WHERE
			[Month] = 11
			AND [DayOfWeekUSA] = 'Monday'
		ORDER BY
			YEAR,
			DayOfMonth 

		DECLARE @CNTR INT, @POS INT, @STARTYEAR INT, @ENDYEAR INT, @MINDAY INT

		SELECT
			@CURRENTYEAR = MIN([Year])
			, @STARTYEAR = MIN([Year])
			, @ENDYEAR = MAX([Year])
		FROM @Holidays

		WHILE @CURRENTYEAR <= @ENDYEAR
		BEGIN
			SELECT @CNTR = COUNT([Year])
			FROM @Holidays
			WHERE [Year] = @CURRENTYEAR

			SET @POS = 1

			WHILE @POS <= @CNTR
			BEGIN
				SELECT @MINDAY = MIN(DAY)
				FROM @Holidays
				WHERE
					[Year] = @CURRENTYEAR
					AND [Week] IS NULL

				UPDATE @Holidays
					SET [Week] = @POS
				WHERE
					[Year] = @CURRENTYEAR
					AND [Day] = @MINDAY

				SELECT @POS = @POS + 1
			END

			SELECT @CURRENTYEAR = @CURRENTYEAR + 1
		END

		UPDATE [dbo].[DimDate]
			SET HolidayUSA  = 'Election Day'				
		FROM [dbo].[DimDate] DT
			JOIN @Holidays HL ON (HL.DateID + 1) = DT.DateKey
		WHERE
			[Week] = 1
	END
	--set flag for USA holidays in Dimension
	UPDATE [dbo].[DimDate]
SET IsHolidayUSA = CASE WHEN HolidayUSA  IS NULL THEN 0 WHEN HolidayUSA  IS NOT NULL THEN 1 END
/*****************************************************************************************/
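
To verify the holiday updates, you can list the rows that were flagged as USA holidays with a simple check query against the same DimDate table:

SELECT [DateKey], [Date], HolidayUSA
FROM [dbo].[DimDate]
WHERE IsHolidayUSA = 1
ORDER BY [Date]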


Reference: http://www.codeproject.com/Articles/647950/Create-and-Populate-Date-Dimension-for-Data-Wareho

Create & Populate Time Dimension with 24 Hour+ Values

A common task most of us face while setting up a new data warehouse is creating a time dimension.

This tip will especially help people who work in Business Intelligence: whenever they set up a new data warehouse as a starting point, they need to create and fill their time dimension with the necessary values.

I searched the internet for a T-SQL script that can create and fill a time dimension with 24-hour-plus values. I did not find any ready-made script, so I invested my time in creating this script and am now sharing it so that it can help everyone.

The time dimension script below creates the time dimension table and populates it with appropriate values. It also creates time-bucket columns in the table and fills them with group values, so that you can aggregate data using various combinations of hourly or day-time buckets, analyze data using these buckets, and study trends over the entire day.
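
For example, once the DimTime table defined below is created and populated, a query like the following aggregates a measure by day-time bucket. This is only a minimal sketch: FactCalls, TimeKey and CallCount are assumed (hypothetical) names for a fact table of your own, while DayTimeBucketGroupKey and DayTimeBucket come from the DimTime table.

SELECT
	t.DayTimeBucketGroupKey,
	t.DayTimeBucket,
	SUM(f.CallCount) AS TotalCalls
FROM dbo.FactCalls f
	INNER JOIN dbo.DimTime t ON f.TimeKey = t.TimeKey
GROUP BY
	t.DayTimeBucketGroupKey,
	t.DayTimeBucket
ORDER BY
	t.DayTimeBucketGroupKey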


CREATE TABLE [dbo].[DimTime](
[TimeKey] [int] NOT NULL,
[TimeAltKey] [int] NOT NULL,
[Time30] [varchar](8) NOT NULL,
[Hour30] [tinyint] NOT NULL,
[MinuteNumber] [tinyint] NOT NULL,
[SecondNumber] [tinyint] NOT NULL,
[TimeInSecond] [int] NOT NULL,
[HourlyBucket] varchar(15)not null,
[DayTimeBucketGroupKey] int not null,
[DayTimeBucket] varchar(100) not null
CONSTRAINT [PK_DimTime] PRIMARY KEY CLUSTERED
(
[TimeKey] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/***** Create Stored procedure In Test_DW and Run SP To Fill Time Dimension with Values****/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FillDimTime]
as
BEGIN
-- Specify the total number of hours you need to fill in the time dimension
DECLARE @Size INTEGER
-- If @Size = 32 then this will fill values up to 32:59 hr in the time dimension
SET @Size = 23
DECLARE @hour INTEGER
DECLARE @minute INTEGER
DECLARE @second INTEGER
DECLARE @k INTEGER
DECLARE @TimeAltKey INTEGER
DECLARE @TimeInSeconds INTEGER
DECLARE @Time30 varchar(25)
DECLARE @Hour30 varchar(4)
DECLARE @Minute30 varchar(4)
DECLARE @Second30 varchar(4)
DECLARE @HourBucket varchar(15)
DECLARE @HourBucketGroupKey int
DECLARE @DayTimeBucket varchar(100)
DECLARE @DayTimeBucketGroupKey int

SET @hour = 0
SET @minute = 0
SET @second = 0
SET @k = 0
SET @TimeAltKey = 0

WHILE (@hour <= @Size)
BEGIN
	IF (@hour < 10)
		SET @Hour30 = '0' + CAST(@hour AS varchar(10))
	ELSE
		SET @Hour30 = @hour

	-- Create hour bucket value
	SET @HourBucket = @Hour30 + ':00' + '-' + @Hour30 + ':59'

	WHILE (@minute <= 59)
	BEGIN
		WHILE (@second <= 59)
		BEGIN
			SET @TimeAltKey = @hour * 10000 + @minute * 100 + @second
			SET @TimeInSeconds = @hour * 3600 + @minute * 60 + @second

			IF @minute < 10
				SET @Minute30 = '0' + CAST(@minute AS varchar(10))
			ELSE
				SET @Minute30 = @minute

			IF @second < 10
				SET @Second30 = '0' + CAST(@second AS varchar(10))
			ELSE
				SET @Second30 = @second

			-- Concatenate values for Time30
			SET @Time30 = @Hour30 + ':' + @Minute30 + ':' + @Second30

			-- DayTimeBucketGroupKey can be used for sorting the day-time buckets in the proper order
			SELECT @DayTimeBucketGroupKey =
				CASE
					WHEN (@TimeAltKey >= 0      AND @TimeAltKey <= 25959)  THEN 0
					WHEN (@TimeAltKey >= 30000  AND @TimeAltKey <= 65959)  THEN 1
					WHEN (@TimeAltKey >= 70000  AND @TimeAltKey <= 85959)  THEN 2
					WHEN (@TimeAltKey >= 90000  AND @TimeAltKey <= 115959) THEN 3
					WHEN (@TimeAltKey >= 120000 AND @TimeAltKey <= 135959) THEN 4
					WHEN (@TimeAltKey >= 140000 AND @TimeAltKey <= 155959) THEN 5
					WHEN (@TimeAltKey >= 160000 AND @TimeAltKey <= 175959) THEN 6
					WHEN (@TimeAltKey >= 180000 AND @TimeAltKey <= 235959) THEN 7
					WHEN (@TimeAltKey >= 240000) THEN 8
				END
			-- print @DayTimeBucketGroupKey

			-- DayTimeBucket: the day divided into specific time zones,
			-- so data can be grouped per bucket for analysis by time of day
			SELECT @DayTimeBucket =
				CASE
					WHEN (@TimeAltKey >= 0      AND @TimeAltKey <= 25959)  THEN 'Late Night (00:00 AM To 02:59 AM)'
					WHEN (@TimeAltKey >= 30000  AND @TimeAltKey <= 65959)  THEN 'Early Morning(03:00 AM To 6:59 AM)'
					WHEN (@TimeAltKey >= 70000  AND @TimeAltKey <= 85959)  THEN 'AM Peak (7:00 AM To 8:59 AM)'
					WHEN (@TimeAltKey >= 90000  AND @TimeAltKey <= 115959) THEN 'Mid Morning (9:00 AM To 11:59 AM)'
					WHEN (@TimeAltKey >= 120000 AND @TimeAltKey <= 135959) THEN 'Lunch (12:00 PM To 13:59 PM)'
					WHEN (@TimeAltKey >= 140000 AND @TimeAltKey <= 155959) THEN 'Mid Afternoon (14:00 PM To 15:59 PM)'
					WHEN (@TimeAltKey >= 160000 AND @TimeAltKey <= 175959) THEN 'PM Peak (16:00 PM To 17:59 PM)'
					WHEN (@TimeAltKey >= 180000 AND @TimeAltKey <= 235959) THEN 'Evening (18:00 PM To 23:59 PM)'
					WHEN (@TimeAltKey >= 240000) THEN 'Previous Day Late Night (24:00 PM to ' + CAST(@Size AS varchar(10)) + ':00 PM )'
				END
			-- print @DayTimeBucket

			INSERT INTO DimTime (TimeKey, TimeAltKey, [Time30], [Hour30], [MinuteNumber], [SecondNumber], [TimeInSecond], [HourlyBucket], DayTimeBucketGroupKey, DayTimeBucket)
			VALUES (@k, @TimeAltKey, @Time30, @hour, @minute, @second, @TimeInSeconds, @HourBucket, @DayTimeBucketGroupKey, @DayTimeBucket)

			SET @second = @second + 1
			SET @k = @k + 1
		END
		SET @minute = @minute + 1
		SET @second = 0
	END
	SET @hour = @hour + 1
	SET @minute = 0
END
END
Go
Exec [FillDimTime]
go
select * from DimTime

Reference: http://www.codeproject.com/Tips/642912/Create-Populate-Time-Dimension-with-24-Hourplus-Va

Learn X++ to become Microsoft Dynamic AX developer

Do you want to be a Microsoft Dynamics AX developer? I assume you do. Learning how to develop on Microsoft Dynamics AX is not as easy as it seems; it is not as easy as learning to develop on other platforms like Microsoft SharePoint. The material provided by Microsoft will not give you the full picture of the platform; it only gives you the guidelines, and you have to go and search, and search, and search some more. It took me two full years to learn what I know now. It is not much, but I can do many things not included in the documentation.

So I decided to write this article to shorten the path for new developers who want to learn about developing with Microsoft Dynamics AX, and even to help those who are already Microsoft Dynamics AX developers.

The Platform:

Programming Language:

Microsoft Dynamics AX is built with its own programming language called X++. It is like C++, C#, or Java: it is object oriented, and if you know one of these languages you will quickly feel familiar with X++.

IDE (Integrated Development Environment) :

Microsoft Dynamics AX has its own IDE called MorphX, which you will use for all your development tasks. It is very easy to use and supports drag and drop to make a developer's life easier, but in my opinion its only drawback is that it does not support multiple monitors: you have to work in one window on one monitor, unlike Visual Studio where you can drag the Toolbox and drop it onto another monitor.

The Journey:

I started by reading the Microsoft student training documents (you need access to Microsoft PartnerSource or CustomerSource), then started to develop and found out that these documents are not enough and I needed to learn more. I started Googling (or Binging) and landed on small pieces that helped, but you have to add your own work to turn them into a full solution. For example, I needed to create a sales order and sales lines from code. I found sample code that creates one using a job (a job in AX is a small runnable piece of code), but not how to use it inside a form: whether to implement it in a class, form, or table, and if in a form, at which level (data source or design). I was confused, and nothing on the web helped me put things together until I found a great free book called MorphX IT. It was written for AX 4.0, but it still has useful information that works with AX 2012 R2, and I encourage you to read it after the student training documents. I really got the full picture of the platform from it, and now I feel like I understand what I am doing.

Materials:

Books:

Inside Microsoft Dynamics AX 2012

Microsoft Dynamics AX 2012 Development Cookbook

Microsoft Dynamics AX 2012 Services

Microsoft Dynamics AX 2012 Security How-To

Inside Microsoft Dynamics® AX 2009

Microsoft Dynamics AX 2009 Programming: Getting Started

Microsoft Dynamics AX 2009 Development Cookbook

MorphX IT

OCR and Android

On Device or In the Cloud?

Before deciding on an OCR library, one needs to decide, where the OCR process should take place: on the Smartphone or in the Cloud. Each approach has its advantages.
On device OCR can be performed without requiring an Internet connection and instead of sending a photo, which can potentially be huge (many phones have 8 or 12 Mega-Pixel cameras now), the text is recognized by an on-board OCR-engine.
However, OCR-libraries tend to be large, i.e. the mobile application will be of considerable size. Depending on the amount of text that needs to be recognized and the available data transfer speed, a cloud-service may provide the result faster. A cloud-service can be updated more easily but individually optimizing (training) an OCR engine may work better when done locally on the device.

Which OCR Library to Choose?

After taking a closer look at all the comparisons, Tesseract stands out. It provides good accuracy, it’s open source and Apache-licensed, and it has broad language support. It was created by HP and is now developed by Google.

Also, since Tesseract is open source and Apache- Licensed, we can take the source and port it to the Android platform, or put it on a Web-server to run our very own Cloud-service.

A tesseract is a four-dimensional object, much like a cube is a three-dimensional object. A square has two dimensions. You can make a cube from six squares. A cube has three dimensions. The tesseract is made in the same way, but in four dimensions.

1. Tesseract

The Tesseract OCR engine was developed at Hewlett Packard Labs and is currently sponsored by Google. It was among the top three OCR engines in terms of character accuracy in 1995. http://code.google.com/p/tesseract-ocr/

1.1. Running Tesseract locally on a Mac

Like with so many other Unix and Linux tools, Homebrew (http://mxcl.github.com/homebrew/) is the easiest and most flexible way to install the UNIX tools Apple didn’t include with OS X. Once Homebrew is installed (https://github.com/mxcl/homebrew/wiki/installation), Tesseract can be installed on OS X as easily as:
$ brew install tesseract
Once installed,
$ brew info tesseract will return something like this:

tesseract 3.00

http://code.google.com/p/tesseract-ocr/

Depends on: libtiff
/usr/local/Cellar/tesseract/3.00 (316 files, 11M)
Tesseract is an OCR (Optical Character Recognition) engine.
The easiest way to use it is to convert the source to a Grayscale tiff:
`convert source.png -type Grayscale terre_input.tif`
then run tesseract:
`tesseract terre_input.tif output`

http://github.com/mxcl/homebrew/commits/master/Library/Formula/tesseract.rb


Tesseract doesn’t come with a GUI and instead runs from a command-line interface. To OCR a TIFF-encoded image located on your desktop, you would do something like this:
$ tesseract ~/Desktop/cox.tiff ~/Desktop/cox
Using the image below, Tesseract wrote with perfect accuracy the resulting text into
~/Desktop/cox.txt

There are at least two projects providing a GUI front-end for Tesseract on OS X:

  1. TesseractGUI, a native OSX client: http://download.dv8.ro/files/TesseractGUI/
  2. VietOCR, a Java Client: http://vietocr.sourceforge.net/

1.2. Running Tesseract as a Cloud-Service on a Linux Server

One of the fastest and easiest ways to deploy Tesseract as a Web-service uses Tornado (http://www.tornadoweb.org/), an open source (Apache-licensed) Python non-blocking web server. Since Tesseract accepts TIFF-encoded images but our Cloud-Service should rather work with the more popular JPEG image format, we also need to deploy the free Python Imaging Library (http://www.pythonware.com/products/pil/); the license terms are here: http://www.pythonware.com/products/pil/license.htm

The deployment on Ubuntu 11.10 64-bit server looks something like this:

sudo apt-get install python-tornado
sudo apt-get install python-imaging
sudo apt-get install tesseract-ocr

1.2.1. The HTTP Server-Script for port 8080

#!/usr/bin/env python
import tornado.httpserver
import tornado.ioloop
import tornado.web
import pprint
import Image
from tesseract import image_to_string
import StringIO
import os.path
import uuid
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('<form action="/" method="post" enctype="multipart/form-data">'
                   '<input type="file" name="the_file" />'
                   '<input type="submit" value="Submit" />'
                   '</form>')
    def post(self):
        self.set_header("Content-Type", "text/html")
        self.write("")
        # create a unique ID file
        tempname = str(uuid.uuid4()) + ".jpg"
        myimg = Image.open(StringIO.StringIO(self.request.files.items()[0][1][0]['body']))
        myfilename = os.path.join(os.path.dirname(__file__), "static", tempname)
        # save image to file as JPEG
        myimg.save(myfilename)
        # do OCR, print result
        self.write(image_to_string(myimg))
        self.write("")
settings = {
    "static_path": os.path.join(os.path.dirname(__file__), "static"),
}
application = tornado.web.Application([
    (r"/", MainHandler),
], **settings)
if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8080)
    tornado.ioloop.IOLoop.instance().start()

The Server receives a JPEG image file and stores it locally in the ./static directory, before calling image_to_string, which is defined in the Python script below:

1.2.2. image_to_string function implementation

#!/usr/bin/env python
tesseract_cmd = 'tesseract'
import Image
import StringIO
import subprocess
import sys
import os
__all__ = ['image_to_string']
def run_tesseract(input_filename, output_filename_base, lang=None, boxes=False):
    '''
    runs the command:
        `tesseract_cmd` `input_filename` `output_filename_base`
    returns the exit status of tesseract, as well as tesseract's stderr output
    '''
    command = [tesseract_cmd, input_filename, output_filename_base]
    if lang is not None:
        command += ['-l', lang]
    if boxes:
        command += ['batch.nochop', 'makebox']
    proc = subprocess.Popen(command,
            stderr=subprocess.PIPE)
    return (proc.wait(), proc.stderr.read())
def cleanup(filename):
    ''' tries to remove the given filename. Ignores non-existent files '''
    try:
        os.remove(filename)
    except OSError:
        pass
def get_errors(error_string):
    '''
    returns all lines in the error_string that start with the string "error"
    '''
    lines = error_string.splitlines()
    error_lines = tuple(line for line in lines if line.find('Error') >= 0)
    if len(error_lines) > 0:
        return '\n'.join(error_lines)
    else:
        return error_string.strip()
def tempnam():
    ''' returns a temporary file-name '''
    # prevent os.tmpname from printing an error...
    stderr = sys.stderr
    try:
        sys.stderr = StringIO.StringIO()
        return os.tempnam(None, 'tess_')
    finally:
        sys.stderr = stderr
class TesseractError(Exception):
    def __init__(self, status, message):
        self.status = status
        self.message = message
        self.args = (status, message)
def image_to_string(image, lang=None, boxes=False):
    '''
    Runs tesseract on the specified image. First, the image is written to disk,
    and then the tesseract command is run on the image. Tesseract's result is
    read, and the temporary files are erased.
    '''
    input_file_name = '%s.bmp' % tempnam()
    output_file_name_base = tempnam()
    if not boxes:
        output_file_name = '%s.txt' % output_file_name_base
    else:
        output_file_name = '%s.box' % output_file_name_base
    try:
        image.save(input_file_name)
        status, error_string = run_tesseract(input_file_name,
                                             output_file_name_base,
                                             lang=lang,
                                             boxes=boxes)
        if status:
            errors = get_errors(error_string)
            raise TesseractError(status, errors)
        f = file(output_file_name)
        try:
            return f.read().strip()
        finally:
            f.close()
    finally:
        cleanup(input_file_name)
        cleanup(output_file_name)
if __name__ == '__main__':
    if len(sys.argv) == 2:
        filename = sys.argv[1]
        try:
            image = Image.open(filename)
        except IOError:
            sys.stderr.write('ERROR: Could not open file "%s"\n' % filename)
            exit(1)
        print image_to_string(image)
    elif len(sys.argv) == 4 and sys.argv[1] == '-l':
        lang = sys.argv[2]
        filename = sys.argv[3]
        try:
            image = Image.open(filename)
        except IOError:
            sys.stderr.write('ERROR: Could not open file "%s"\n' % filename)
            exit(1)
        print image_to_string(image, lang=lang)
    else:
        sys.stderr.write('Usage: python tesseract.py [-l language] input_file\n')
        exit(2)

1.2.3. The Service deploy/start Script

description  "OCR WebService"
start on runlevel [2345]
stop on runlevel [!2345]
pre-start script
mkdir /tmp/ocr
mkdir /tmp/ocr/static
cp /usr/share/ocr/*.py /tmp/ocr
end script
exec /tmp/ocr/tesserver.py

After the service has been started, it can be accessed through a Web browser as shown here: http://proton.techcasita.com:8080. I’m currently running Tesseract 3.01 on Ubuntu Linux 11.10 64-bit; please be gentle, it runs on an Intel Atom CPU 330 @ 1.60GHz with 4 cores (typically found in netbooks). The HTML-encoded result looks something like this:

<html><body>Contact Us
www. cox.com
Customer Serv 760-788-9000
Repair 76O—788~71O0
Cox Telephone 888-222-7743</body></html>
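
The service can also be exercised from a short script, without the Android client described in the next section. Below is a minimal sketch using the Python requests library (an assumed dependency, not part of this article); it posts the image under a multipart field named "uploaded", mirroring the part name the Android client below sends, and prints whatever HTML the server returns. Adjust the URL and field name to match your deployment.

import requests

def ocr_remote(image_path, url="http://localhost:8080/"):
    """POST a JPEG to the OCR web service and return the response body."""
    with open(image_path, "rb") as f:
        # The field name "uploaded" mirrors the Android client's multipart part name;
        # verify it against your Tornado handler.
        files = {"uploaded": ("the_image.jpg", f, "image/jpeg")}
        response = requests.post(url, files=files)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(ocr_remote("sample.jpg"))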

1.3 Accessing the Tesseract Cloud-Service from Android

The OCRTaskActivity below utilizes Android’s built-in AsyncTask as well as the Apache Software Foundation’s HttpComponents library HttpClient 4.1.2, available here: http://hc.apache.org/httpcomponents-client-ga/index.html. OCRTaskActivity expects the image to be passed in as the Intent Extra “ByteArray” of type ByteArray. The OCR result is returned to the calling Activity as OCR_TEXT, as shown here:

setResult(Activity.RESULT_OK, getIntent().putExtra("OCR_TEXT", result));
import android.app.Activity;
import android.graphics.BitmapFactory;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.ProgressBar;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.HttpMultipartMode;
import org.apache.http.entity.mime.MultipartEntity;
import org.apache.http.entity.mime.content.ByteArrayBody;
import org.apache.http.entity.mime.content.StringBody;
import org.apache.http.impl.client.DefaultHttpClient;
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class OCRTaskActivity extends Activity {
    private static final String LOG_TAG = OCRTaskActivity.class.getSimpleName();
    private static String[] URL_STRINGS = {"http://proton.techcasita.com:8080"};
    private byte[] mBA;
    private ProgressBar mProgressBar;
    @Override
    public void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.ocr);
        mBA = getIntent().getExtras().getByteArray("ByteArray");
        ImageView iv = (ImageView) findViewById(R.id.ImageView);
        iv.setImageBitmap(BitmapFactory.decodeByteArray(mBA, 0, mBA.length));
        mProgressBar = (ProgressBar) findViewById(R.id.progressBar);
        OCRTask task = new OCRTask();
        task.execute(URL_STRINGS);
    }
    private class OCRTask extends AsyncTask<String, Void, String> {
        @Override
        protected String doInBackground(final String... urls) {
            String response = "";
            for (String url : urls) {
                try {
                    response = executeMultipartPost(url, mBA);
                    Log.v(LOG_TAG, "Response:" + response);
                    break;
                } catch (Throwable ex) {
                    Log.e(LOG_TAG, "error: " + ex.getMessage());
                }
            }
            return response;
        }
        @Override
        protected void onPostExecute(final String result) {
            mProgressBar.setVisibility(View.GONE);
            setResult(Activity.RESULT_OK, getIntent().putExtra("OCR_TEXT", result));
            finish();
        }
    }
    private String executeMultipartPost(final String stringUrl, final byte[] bm) throws Exception {
        HttpClient httpClient = new DefaultHttpClient();
        HttpPost postRequest = new HttpPost(stringUrl);
        ByteArrayBody bab = new ByteArrayBody(bm, "the_image.jpg");
        MultipartEntity reqEntity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        reqEntity.addPart("uploaded", bab);
        reqEntity.addPart("name", new StringBody("the_file"));
        postRequest.setEntity(reqEntity);
        HttpResponse response = httpClient.execute(postRequest);
        BufferedReader reader = new BufferedReader(new InputStreamReader(response.getEntity().getContent(), "UTF-8"));
        String sResponse;
        StringBuilder s = new StringBuilder();
        while ((sResponse = reader.readLine()) != null) {
            s = s.append(sResponse).append('\n');
        }
        int i = s.indexOf("body");
        int j = s.lastIndexOf("body");
        return s.substring(i + 5, j - 2);
    }
}

1.4. Building a Tesseract native Android Library to be bundled with an Android App

This approach allows an Android application to perform OCR even without a network connection, i.e. the OCR engine is on-board. There are currently two source bases to start from:

  1. Tesseract Tools for Android is a set of Android APIs and build files for the Tesseract OCR and Leptonica image processing libraries:
    svn checkout http://tesseract-android-tools.googlecode.com/svn/trunk/ tesseract-android-tools
  2. A fork of Tesseract Tools for Android (tesseract-android-tools) that adds some additional functions:
    git clone git://github.com/rmtheis/tess-two.git

… I went with option 2.

1.4.1. Building the native lib

Each project can be built with the same build steps (see below), but neither works with Android’s NDK r7. Going back to NDK r6b solved that problem. Here are the build steps; the build takes a little while, even on a fast machine.

cd <project-directory>/tess-two
export TESSERACT_PATH=${PWD}/external/tesseract-3.01
export LEPTONICA_PATH=${PWD}/external/leptonica-1.68
export LIBJPEG_PATH=${PWD}/external/libjpeg
ndk-build
android update project --path .
ant release

The build steps create the native libraries in the libs/armeabi and libs/armeabi-v7a directories.

The tess-two project can now be included as a library project in an Android project, and with the JNI layer in place, calling into the native OCR library looks something like this:

1.4.2. Developing a simple Android App with built-in OCR capabilities

...
TessBaseAPI baseApi = new TessBaseAPI();
baseApi.init(DATA_PATH, LANG);
baseApi.setImage(bitmap);
String recognizedText = baseApi.getUTF8Text();
baseApi.end();
...

1.4.2.1. Libraries / TrainedData / App Size

The native libraries are about 3 MBytes in size. Additionally, a language- and font-dependent training resource file is needed.
The eng.traineddata file (e.g. available with the desktop version of Tesseract) is placed into the Android project’s assets/tessdata folder and deployed with the application, adding another 2 MBytes to the app. However, due to compression, the actual downloadable Android application is “only” about 4.1 MBytes.

During the first start of the application, the eng.traineddata resource file is copied to the phone’s SDCard.

The ocr() method for the sample app may look something like this:

protected void ocr() {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = 2;
        Bitmap bitmap = BitmapFactory.decodeFile(IMAGE_PATH, options);
        try {
            ExifInterface exif = new ExifInterface(IMAGE_PATH);
            int exifOrientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
            Log.v(LOG_TAG, "Orient: " + exifOrientation);
            int rotate = 0;
            switch (exifOrientation) {
                case ExifInterface.ORIENTATION_ROTATE_90:
                    rotate = 90;
                    break;
                case ExifInterface.ORIENTATION_ROTATE_180:
                    rotate = 180;
                    break;
                case ExifInterface.ORIENTATION_ROTATE_270:
                    rotate = 270;
                    break;
            }
            Log.v(LOG_TAG, "Rotation: " + rotate);
            if (rotate != 0) {
                // Getting width & height of the given image.
                int w = bitmap.getWidth();
                int h = bitmap.getHeight();
                // Setting pre rotate
                Matrix mtx = new Matrix();
                mtx.preRotate(rotate);
                // Rotating Bitmap
                bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
                // tesseract req. ARGB_8888
                bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
            }
        } catch (IOException e) {
            Log.e(LOG_TAG, "Rotate or coversion failed: " + e.toString());
        }
        ImageView iv = (ImageView) findViewById(R.id.image);
        iv.setImageBitmap(bitmap);
        iv.setVisibility(View.VISIBLE);
        Log.v(LOG_TAG, "Before baseApi");
        TessBaseAPI baseApi = new TessBaseAPI();
        baseApi.setDebug(true);
        baseApi.init(DATA_PATH, LANG);
        baseApi.setImage(bitmap);
        String recognizedText = baseApi.getUTF8Text();
        baseApi.end();
        Log.v(LOG_TAG, "OCR Result: " + recognizedText);
        // clean up and show
        if (LANG.equalsIgnoreCase("eng")) {
            recognizedText = recognizedText.replaceAll("[^a-zA-Z0-9]+", " ");
        }
        if (recognizedText.length() != 0) {
            ((TextView) findViewById(R.id.field)).setText(recognizedText.trim());
        }
    }

OCR on Android

The popularity of smartphones, combined with their built-in high-quality cameras, has created a new category of mobile applications that benefit greatly from OCR.

OCR is a very mature technology with a broad range of libraries to choose from. There are Apache- and BSD-licensed, fast and accurate solutions available from the open-source community; I have taken a closer look at Tesseract, which was originally developed by HP and is now maintained by Google.

Tesseract can be used to build a desktop application or a cloud service, and it can even be baked into a mobile Android application to perform on-board OCR. All three variations of OCR with the Tesseract library have been demonstrated above.

Focusing on mobile applications, however, it became very clear that even on phones with a 5 MP camera, the accuracy of the results still varies greatly, depending on lighting conditions, fonts and font sizes, as well as surrounding artifacts.

Just like with the TeleForm application, even the best OCR engines perform poorly if the input image has not been prepared correctly. To make OCR work on a mobile device, no matter whether the OCR will eventually run on-board or in the cloud, much development time needs to be spent training the engine and, even more importantly, selecting and preparing the image areas that will be provided as input to the OCR engine; it is going to be all about the pre-processing.
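
To give an idea of what such pre-processing can look like, here is a minimal sketch using PIL/Pillow that converts a photo to grayscale, upscales it, and binarizes it before handing it to image_to_string. The threshold value and scale factor are illustrative assumptions, not values from this article; a real pipeline would also crop to the region of interest and correct perspective.

from PIL import Image

def preprocess_for_ocr(path, threshold=140, scale=2):
    """Grayscale, upscale and binarize an image to make OCR more reliable."""
    img = Image.open(path).convert("L")        # grayscale
    w, h = img.size
    img = img.resize((w * scale, h * scale))   # enlarge small text
    return img.point(lambda px: 255 if px > threshold else 0)  # binarize

# Example: preprocess_for_ocr("receipt.jpg").save("receipt_clean.bmp")
# and feed the result to image_to_string() from section 1.2.2.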

 

Reference: http://wolfpaulus.com/jounal/android-journal/android-and-ocr/

Top 5 Tools for network security monitoring

Security data can be found on virtually all systems in a corporate network. However, all systems do not provide equally valuable security context. While monitoring everything would be ideal, this is impractical for most organizations due to resource constraints. So what data sources should you prioritize to make the most of your monitoring efforts?

When it comes to security monitoring, context is the key. The more relevant security context you have, the more likely it is you will successfully detect real security incidents while weeding out false positives (e.g. non-threats). In determining which devices and systems to monitor for security data, the first priority is to give yourself as much useful context as possible.

Based on a decade of monitoring experience, SecureWorks believes the top five sources of security context are:

Number One: Network-based Intrusion Detection and Prevention Systems (NIDS/NIPS)

NIDS and NIPS devices use signatures to detect security events on your network. Performing full packet inspection of network traffic at the perimeter or across key network segments, most NIDS/NIPS devices provide detailed alerts that help to detect:

  • Known vulnerability exploit attempts
  • Known Trojan activity
  • Anomalous behavior (depending on the IDS/IPS)
  • Port and Host scans

Number Two: Firewalls

Serving as the network’s gatekeeper, firewalls allow and log incoming and outgoing network connections based on your policies. Some firewalls also have basic NIDS/NIPS signatures to detect security events. Monitoring firewall logs and alerts helps to detect:

  • New and unknown threats, such as custom Trojan activity
  • Port and Host scans
  • Worm outbreaks
  • Minor anomalous behavior
  • Most any activity denied by firewall policy

Number Three: Host-based Intrusion Detection and Prevention Systems (HIDS/HIPS)

Like NIDS/NIPS, host-based intrusion detection and prevention systems utilize signatures to detect security events. But instead of inspecting network traffic, HIDS/HIPS agents are installed on servers to directly alert on security activity. Monitoring HIDS/HIPS alerts helps to detect:

  • Known vulnerability exploit attempts
  • Console exploit attempts
  • Exploit attempts performed over encrypted channels
  • Password grinding (manual or automated attempts to guess passwords)
  • Anomalous behavior by users or applications

Number Four: Network Devices with Access Control Lists (ACLs)

Network devices that can use ACLs, such as routers and VPN servers, have the ability to control network traffic based on permitted networks and hosts. Monitoring logs from devices with ACLs helps to detect:

  • New and unknown threats, such as custom Trojan activity
  • Port and Host scans
  • Minor anomalous behavior
  • Most anything denied by the ACLs

Number Five: Server and Application Logs

Many types of servers and applications log events such as login attempts and user activity. Depending on the extent of logging capabilities, monitoring server and application logs can help to detect the following (a small log-parsing sketch follows the list):

  • Known and unknown exploit attempts
  • Password Grinding
  • Anomalous behavior by users or applications
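
As a concrete illustration of the password-grinding item above, the sketch below counts failed SSH logins per source address in a Unix auth log and flags sources that exceed a threshold, which is one simple way password grinding shows up in server logs. The log path, line format, and threshold of 10 attempts are illustrative assumptions.

import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_sources(log_path="/var/log/auth.log", threshold=10):
    """Return source IPs with at least `threshold` failed login attempts."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for ip, attempts in suspicious_sources().items():
        print("possible password grinding from %s (%d failures)" % (ip, attempts))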

It is important to understand that the incremental value of a data source will vary from situation to situation. A source’s purpose, its location in your network and the quality of the data it provides are a few of the many variables that must be considered when planning your security monitoring strategy.

Keep in mind that there are many other security technologies, network devices and log sources throughout your IT environment that may also provide beneficial context to your security monitoring efforts. For example, Unified Threat Management (UTM) devices which combine firewall, NIDS/NIPS and other capabilities onto a single device can be monitored to detect similar events as standalone firewalls and NIDS/NIPS devices.

By monitoring the assets that provide the highest value security context, you can optimize security monitoring efforts. Doing so will provide faster, more accurate detection of threats while making the most of your security resources. For additional information on monitoring security events and other security topics, please visit the SecureWorks website.

 

Featured Gartner Research:

What Organizations are Spending on IT Security

According to research and advisory firm Gartner Inc., “Many CIOs and chief information security officers (CISOs) are uncertain about what is a ‘normal’ level of security spending in terms of a percentage of the overall IT budget – especially during economic uncertainty.” This research note will help IT managers understand how organizations are investing in their information security and compare their spending with that of their peers.

View the complimentary Gartner report made available to you by SecureWorks.

 

Security 101: Web Application Firewalls

What is a Web Application Firewall?
A web application firewall (WAF) is a tool designed to protect externally-facing web applications used for online banking, Internet retail sales, discussion boards and many other functions from application layer attacks such as cross-site scripting (XSS), cross-site request forgery (XSRF) and SQL injection. Because web application attacks exploit flaws in application logic that is often developed internally, each attack is unique to its target application. This makes it difficult to detect and prevent application layer attacks using existing defenses such as network firewalls and NIDS/NIPS.

How do WAFs Work?
WAFs utilize a set of rules or policies to control communications to and from a web application. These rules are designed to block common application layer attacks. Architecturally, a WAF is deployed in front of an application to intercept communications and enforce policies before they reach the application.
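
Conceptually, that policy-enforcement step is a filter sitting in front of the application. The sketch below illustrates the idea as a Python WSGI middleware that rejects requests whose query string matches a couple of naive attack signatures; the patterns and blocking behavior are purely illustrative and not how any particular WAF product works.

import re

# Illustrative signatures only; real WAF rule sets are far larger and tuned per application.
ATTACK_SIGNATURES = [
    re.compile(r"(?i)<script\b"),        # naive cross-site scripting probe
    re.compile(r"(?i)union\s+select"),   # naive SQL injection probe
]

class SimpleWAF:
    """Wraps a WSGI application and blocks requests matching a signature."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(sig.search(query) for sig in ATTACK_SIGNATURES):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by WAF policy"]
        return self.app(environ, start_response)

# Usage: app = SimpleWAF(app)  # wrap an existing WSGI application

A real WAF would also inspect headers, request bodies and cookies, which is exactly where the per-application tailoring effort and the false-positive risk discussed below come from.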

What are the Risks of Deploying a WAF?

Depending on the importance of the web application to your business, the risk of experiencing false positives that interrupt legitimate communications can be a concern. To provide sound protection with minimal false positives, WAF rules and policies must be tailored to the application(s) the WAF is defending. In many cases, this requires significant up-front customization based on in-depth knowledge of the application in question. This effort must also be maintained to address modifications to the application over time.

What are the Benefits of Deploying a WAF?

A WAF can be beneficial in terms of both security and compliance. Applications are a prime target for today’s hackers. Also, the Payment Card Industry (PCI) Data Security Standard requires companies who process, store or transmit payment card data to protect their externally-facing web applications from known attacks (Requirement 6.6). If managed properly and used in conjunction with regular application code reviews, vulnerability testing and remediation, WAFs can be a solid option for protecting against web application attacks and satisfying related compliance requirements.

 

Reference: http://www.secureworks.com/resources/newsletter/2008-07/

NIDS (Network Intrusion Detection System) and NIPS (Network Intrusion Prevention System)

NIDS and NIPS (Behavior based, signature based, anomaly based, heuristic)

An intrusion detection system (IDS) is software that runs on a server or network device to monitor and track network activity. By using an IDS, a network administrator can configure the system to monitor network activity for suspicious behavior that can indicate unauthorized access attempts. IDSs can be configured to evaluate system logs, look at suspicious network activity, and disconnect sessions that appear to violate security settings.

IDSs can be sold with firewalls. Firewalls by themselves will prevent many common attacks, but they don’t usually have the intelligence or the reporting capabilities to monitor the entire network. An IDS, in conjunction with a firewall, allows both a reactive posture with the firewall and a preventive posture with the IDS.

In response to an event, the IDS can react by disabling systems, shutting down ports, ending sessions, deception (redirect to honeypot), and even potentially shutting down your network. A network-based IDS that takes active steps to halt or prevent an intrusion is called a network intrusion prevention system (NIPS). When operating in this mode, they are considered active systems.

Passive detection systems log the event and rely on notifications to alert administrators of an intrusion. Shunning or ignoring an attack is an example of a passive response, where an invalid attack can be safely ignored. A disadvantage of passive systems is the lag between intrusion detection and any remediation steps taken by the administrator.

Intrusion prevention systems (IPSs), like IDSs, follow the same process of gathering and identifying data and behavior, with the added ability to block (prevent) the activity.

A network-based IDS examines network patterns, such as an unusual number of requests destined for a particular server or service, such as an FTP server. Network IDS sensors should be located as close to the network perimeter as possible, e.g. on the firewall, a network tap, span port, or hub, to monitor external traffic. Host IDS systems, on the other hand, are placed on individual hosts, where they can more efficiently monitor internally generated events.

Using both network and host IDS enhances the security of the environment.

Snort is an example of a network intrusion detection and prevention system. It conducts traffic analysis and packet logging on IP networks. Snort uses a flexible rule-based language to describe traffic that it should collect or pass, and a modular detection engine.

Network based intrusion detection attempts to identify unauthorized, illicit, and anomalous behavior based solely on network traffic. Using the captured data, the Network IDS processes and flags any suspicious traffic. Unlike an intrusion prevention system, an intrusion detection system does not actively block network traffic. The role of a network IDS is passive, only gathering, identifying, logging and alerting.

Host based intrusion detection system (HIDS) attempts to identify unauthorized, illicit, and anomalous behavior on a specific device. HIDS generally involves an agent installed on each system, monitoring and alerting on local OS and application activity. The installed agent uses a combination of signatures, rules, and heuristics to identify unauthorized activity. The role of a host IDS is passive, only gathering, identifying, logging, and alerting. Tripwire is an example of a HIDS.

There are no fully mature open standards for intrusion detection at present. The Internet Engineering Task Force (IETF) is the body which develops new Internet standards. It has a working group to develop a common format for IDS alerts.

The following types of monitoring methodologies can be used to detect intrusions and malicious behavior: signature, anomaly, heuristic and rule-based monitoring.

A signature based IDS will monitor packets on the network and compare them against a database of signatures or attributes from known malicious threats. This is similar to the way most antivirus software detects malware. The issue is that there will be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to your IDS.

A network IDS signature is a pattern that we want to look for in traffic. Signatures range from very simple – checking the value of a header field – to highly complex signatures that may actually track the state of a connection or perform extensive protocol analysis.
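
As a toy illustration of the simple end of that range, the sketch below checks a packet payload against a small dictionary of byte patterns from known threats. Real IDS signatures (Snort rules, for example) also match on headers, ports, and connection state; the two patterns here are invented for illustration.

SIGNATURES = {
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
    b"cmd.exe /c": "Windows command-injection attempt",
}

def match_signatures(payload):
    """Return the names of all signatures found in a raw packet payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"GET /?q=cmd.exe /c dir HTTP/1.1"))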

An anomaly-based IDS examines ongoing traffic, activity, transactions, or behavior for anomalies (things outside the norm) on networks or systems that may indicate attack. An IDS which is anomaly based will monitor network traffic and compare it against an established baseline. The baseline will identify what is “normal” for that network, what sort of bandwidth is generally used, what protocols are used, what ports and devices generally connect to each other, and alert the administrator when traffic is detected which is anomalous to the baseline.
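
A toy version of that baseline comparison is sketched below: a mean and standard deviation are learned from historical hourly request counts, and a new observation is flagged if it deviates by more than three standard deviations. The sample data, the three-sigma threshold, and the single metric are illustrative assumptions; production systems baseline many metrics (bandwidth, protocols, port usage) over much longer windows.

from statistics import mean, stdev

def is_anomalous(history, observation, sigma=3.0):
    """Compare a new observation against a baseline learned from history."""
    baseline = mean(history)
    spread = stdev(history)
    return spread > 0 and abs(observation - baseline) > sigma * spread

# Baseline learned from "normal" hourly request counts, then a spike is checked against it:
normal_hours = [120, 130, 118, 125, 122, 119, 127, 124, 121, 126]
print(is_anomalous(normal_hours, 900))   # True  -> alert
print(is_anomalous(normal_hours, 128))   # False -> within baseline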

Heuristic-based security monitoring uses an initial database of known attack types but dynamically alters their signatures based on learned behavior of network traffic. A heuristic system uses algorithms to analyze the traffic passing through the network. Heuristic systems require more fine-tuning to prevent false positives in your network.

A behavior-based system looks for variations in behavior such as unusually high traffic, policy violations, and so on. By looking for deviations in behavior, it is able to recognize potential threats and respond quickly.
Similar to firewall access control rules, a rule-based security monitoring system relies on the administrator to create rules and determine the actions to take when those rules are transgressed.

References:
• http://netsecurity.about.com/cs/hackertools/a/aa030504.htm
• http://www.sans.org/security-resources/idfaq/
• CompTIA Security+ Study Guide: Exam SY0-301, Fifth Edition by Emmett Dulaney
• Mike Meyers’ CompTIA Security+ Certification Passport, Second Edition by T. J. Samuelle
• http://neokobo.blogspot.com/2012/01/118-nids-and-nips.html

UI, UX: Designing functionality in the Tech Industry

Design is a rather broad and vague term. When someone says “I’m a designer,” it is not immediately clear what they actually do day to day. There are a number of different responsibilities encompassed by the umbrella term designer.

Design-related roles exist in a range of areas from industrial design (cars, furniture) to print (magazines, other publications) to tech (websites, mobile apps). With the relatively recent influx of tech companies focused on creating interfaces for screens, many new design roles have emerged. Job titles like UX or UI designer are confusing to the uninitiated and unfamiliar even to designers who come from other industries.

Let’s attempt to distill what each of these titles really means within the context of the tech industry.

UX DESIGNER (USER EXPERIENCE DESIGNER)

UX designers are primarily concerned with how the product feels. A given design problem has no single right answer. UX designers explore many different approaches to solving a specific user problem. The broad responsibility of a UX designer is to ensure that the product logically flows from one step to the next. One way that a UX designer might do this is by conducting in-person user tests to observe users’ behavior. By identifying verbal and non-verbal stumbling blocks, they refine and iterate to create the “best” user experience. An example project is creating a delightful onboarding flow for a new user.

“Define interaction models, user task flows, and UI specifications. Communicate scenarios, end-to-end experiences, interaction models, and screen designs to stakeholders. Work with our creative director and visual designers to incorporate the visual identity of Twitter into features. Develop and maintain design wireframes, mockups, and specifications as needed.”

Experience Designer job description at Twitter

Example of an app’s screens created by a UX designer. Credit: Kitchenware Pro Wireframe Kit by Neway Lau on Dribbble.

Deliverables: Wireframes of screens, storyboards, sitemap

Tools of the trade: Photoshop, Sketch, Illustrator, Fireworks, InVision

You might hear them say this in the wild: “We should show users the ‘Thank You’ page once they have finished signing up.”

UI DESIGNER (USER INTERFACE DESIGNER)

Unlike UX designers who are concerned with the overall feel of the product, user interface designers are particular about how the product is laid out. They are in charge of designing each screen or page with which a user interacts and ensuring that the UI visually communicates the path that a UX designer has laid out. For example, a UI designer creating an analytics dashboard might front load the most important content at the top, or decide whether a slider or a control knob makes the most intuitive sense to adjust a graph. UI designers are also typically responsible for creating a cohesive style guide and ensuring that a consistent design language is applied across the product. Maintaining consistency in visual elements and defining behavior such as how to display error or warning states fall under the purview of a UI designer.

“Concept and implement the visual language of Airbnb.com. Create and advance site-wide style guides.”

-UI Designer job description at Airbnb

The boundary between UI and UX designers is fairly blurred and it is not uncommon for companies to opt to combine these roles.

A UI designer defines the overall layout and look & feel of an app. Credit: Metro Style Interface 4 by Ionut Zamfir on Dribbble.

Tools of the trade: Photoshop, Sketch, Illustrator, Fireworks

You might hear them say this in the wild: “The login and sign up links should be moved to the top right corner.”

VISUAL DESIGNER (GRAPHIC DESIGNER)

A visual designer is the one who pushes pixels. If you ask a non-designer what a designer does, this is probably what comes to mind first. Visual designers are not concerned with how screens link to each other, nor how someone interacts with the product. Instead, their focus is on crafting beautiful icons, controls, and visual elements and making use of suitable typography. Visual designers sweat the small details that others overlook and frequently operate at the 4X to 8X zoom level in Photoshop.

“Produce high-quality visual designs—from concept to execution, including those for desktop, web, and mobile devices at a variety of resolutions (icons, graphics, and marketing materials). Create and iterate on assets that reflect a brand, enforce a language, and inject beauty and life into a product.”

Visual Designer job description at Google

It is also fairly common for UI designers to pull double duty and create the final pixel perfect assets. Some companies choose not to have a separate visual designer role.

A visual designer lays out guides and adjusts every single pixel to ensure that the end result is perfect. Credits: iOS 7 Guide Freebie PSD by Seevi kargwal on Dribbble.

Tools of the trade: Photoshop, Sketch

You might hear them say this in the wild: “The kerning is off and the button should be 1 pixel to the left!”

INTERACTION DESIGNER (MOTION DESIGNER)

Remember the subtle bouncing animation when you pull to refresh in the Mail app on your iPhone? That’s the work of a motion designer. Unlike visual designers who usually deal with static assets, motion designers create animation inside an app. They deal with what the interface does after a user touches it. For example, they decide how a menu should slide in, what transition effects to use, and how a button should fan out. When done well, motion becomes an integral part of the interface by providing visual clues as to how to use the product.

“Proficiency in graphic design, motion graphics, digital art, a sensitivity to typography and color, a general awareness of materials/textures, and a practical grasp of animation. Knowledge of iOS, OS X, Photoshop and Illustrator as well as familiarity with Director (or equivalent), Quartz Composer (or equivalent), 3D computer modeling, motion graphics are required.”

-Interaction Designer job description at Apple

Tools of the trade: AfterEffects, Core Composer, Flash, Origami

You might hear them say this in the wild: “The menu should ease-in from the left in 800ms.”

UX RESEARCHER (USER RESEARCHER)

A UX researcher is the champion of a user’s needs. The goal of a researcher is to answer the twin questions of “Who are our users?” and “What do our users want?” Typically, this role entails interviewing users, researching market data, and gathering findings. Design is a process of constant iteration. Researchers may assist with this process by conducting A/B tests to tease out which design option best satisfies user needs. UX researchers are typically mainstays at large companies, where the access to a plethora of data gives them ample opportunity to draw statistically significant conclusions.

“Work closely with product teams to identify research topics. Design studies that address both user behavior and attitudes. Conduct research using a wide variety of qualitative methods and a subset of quantitative methods, such as surveys.”

UX Researcher job description at Facebook

UX designers also occasionally carry out the role of UX researchers.

Deliverables: User personas, A/B test results, Investigative user studies & interviews

Tools of the trade: Mic, Paper, Docs

You might hear them say this in the wild: “From our research, a typical user…”

FRONT-END DEVELOPER (UI DEVELOPER)

Front-end developers are responsible for creating a functional implementation of a product’s interface. Usually, a UI designer hands off a static mockup to the front-end developer who then translates it into a working, interactive experience. Front-end developers are also responsible for coding the visual interactions that the motion designer comes up with.

Tools of the trade: CSS, HTML, JavaScript

You might hear them say this in the wild: “I’m using a 960px 12-column grid system.”

PRODUCT DESIGNER

Product designer is a catch-all term used to describe a designer who is generally involved in the creation of the look and feel of a product.

The role of a product designer isn’t well-defined and differs from one company to the next. A product designer may do minimal front-end coding, conduct user research, design interfaces, or create visual assets. From start to finish, a product designer helps identify the initial problem, sets benchmarks to address it, and then designs, tests, and iterates on different solutions. Some companies that want more fluid collaboration within the various design roles opt to have this title to encourage the whole design team to collectively own the user experience, user research, and visual design elements.

Some companies use “UX designer” or simply “designer” as a catch-all term. Reading the job description is the best way to figure out how the company’s design team divides the responsibilities.

“Own all facets of design: interaction, visual, product, prototyping. Create pixel-perfect mocks and code for new features across web and mobile.”

Product Designer job description at Pinterest

“I AM LOOKING FOR A DESIGNER”

This is the single most common phrase I hear from new startups. What they are usually looking for is someone who can do everything described above. They want someone who can make pretty icons, create A/B-tested landing sites, logically arrange UI elements on screen, and maybe even do some front-end development. Due to the broad, sweeping scope of this role, we usually hear smaller companies asking to hire a “designer” rather than being specific in their needs.

The boundaries between each of these various design roles are very fluid. Some UX designers are also expected to do interaction design, and often UI designers are expected to push pixels as well. The best way to look for the right person is to describe what you expect the designer to do within your company’s process, and choose a title that best represents the primary task of that person.

OWL (Web Ontology Language)

1 Introduction

The Semantic Web is a vision for the future of the Web in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the Web. The Semantic Web will build on XML’s ability to define customized tagging schemes [XML] and RDF’s flexible approach to representing data [RDF Concepts]. The next element required for the Semantic Web is a web ontology language which can formally describe the semantics of classes and properties used in web documents. In order for machines to perform useful reasoning tasks on these documents, the language must go beyond the basic semantics of RDF Schema [RDF Vocabulary].

This document is one part of the specification of OWL, the Web Ontology Language. The Document Roadmap section of the OWL Overview document describes each of the other documents. This document enumerates the requirements of a web ontology language as perceived by the working group. However, it is expected that future languages will extend OWL, adding, among other things, greater logical capabilities and the ability to establish trust on the Semantic Web.

We motivate the need for a web ontology language by describing six use cases. Some of these use cases are based on efforts currently underway in industry and academia, others demonstrate more long-term possibilities. The use cases are followed by design goals that describe high-level objectives and guidelines for the development of the language. These design goals will be considered when evaluating proposed features. The section on Requirements presents a set of features that should be in the language and gives motivations for those features. The Objectives section describes a list of features that might be useful for many use cases but may not necessarily be addressed by the working group.

The Web Ontology Working Group charter tasks the group to produce this more expressive semantics and to specify mechanisms by which the language can provide “more complex relationships between entities including: means to limit the properties of classes with respect to number and type, means to infer that items with various properties are members of a particular class, a well-defined model of property inheritance, and similar semantic extensions to the base languages.” The detailed specification of the web ontology language will take into consideration:

  • the design goals and requirements that are contained in this document
  • review comments on this document from public feedback, invited experts and working group members
  • specifications of or proposals for languages that meet many of these requirements

1.1 What is an ontology?

An ontology defines the terms used to describe and represent an area of knowledge. Ontologies are used by people, databases, and applications that need to share domain information (a domain is just a specific subject area or area of knowledge, like medicine, tool manufacturing, real estate, automobile repair, financial management, etc.). Ontologies include computer-usable definitions of basic concepts in the domain and the relationships among them (note that here and throughout this document, definition is not used in the technical sense understood by logicians). They encode knowledge in a domain and also knowledge that spans domains. In this way, they make that knowledge reusable.

The word ontology has been used to describe artifacts with different degrees of structure. These range from simple taxonomies (such as the Yahoo hierarchy), to metadata schemes (such as the Dublin Core), to logical theories. The Semantic Web needs ontologies with a significant degree of structure. These need to specify descriptions for the following kinds of concepts:

  • Classes (general things) in the many domains of interest
  • The relationships that can exist among things
  • The properties (or attributes) those things may have

Ontologies are usually expressed in a logic-based language, so that detailed, accurate, consistent, sound, and meaningful distinctions can be made among the classes, properties, and relations. Some ontology tools can perform automated reasoning using the ontologies, and thus provide advanced services to intelligent applications such as: conceptual/semantic search and retrieval, software agents, decision support, speech and natural language understanding, knowledge management, intelligent databases, and electronic commerce.

Ontologies figure prominently in the emerging Semantic Web as a way of representing the semantics of documents and enabling the semantics to be used by web applications and intelligent agents. Ontologies can prove very useful for a community as a way of structuring and defining the meaning of the metadata terms that are currently being collected and standardized. Using ontologies, tomorrow’s applications can be “intelligent,” in the sense that they can more accurately work at the human conceptual level.

Ontologies are critical for applications that want to search across or merge information from diverse communities. Although XML DTDs and XML Schemas are sufficient for exchanging data between parties who have agreed to definitions beforehand, their lack of semantics prevents machines from reliably performing this task given new XML vocabularies. The same term may be used with (sometimes subtly) different meanings in different contexts, and different terms may be used for items that have the same meaning. RDF and RDF Schema begin to approach this problem by allowing simple semantics to be associated with identifiers. With RDF Schema, one can define classes that may have multiple subclasses and super classes, and can define properties, which may have sub properties, domains, and ranges. In this sense, RDF Schema is a simple ontology language. However, in order to achieve interoperation between numerous, autonomously developed and managed schemas, richer semantics are needed. For example, RDF Schema cannot specify that the Person and Car classes are disjoint, or that a string quartet has exactly four musicians as members.
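
To make those two examples concrete, here is a small sketch (assuming the Python rdflib library and an invented example.org namespace) that states exactly what RDF Schema cannot: that Person and Car are disjoint classes, and that a string quartet has exactly four members, expressed as an OWL cardinality restriction.

from rdflib import Graph, Namespace, BNode, Literal
from rdflib.namespace import OWL, RDF, RDFS, XSD

EX = Namespace("http://example.org/music#")   # illustrative namespace
g = Graph()
g.bind("owl", OWL)
g.bind("ex", EX)

# Declare the classes and the property used below
for cls in (EX.Person, EX.Car, EX.StringQuartet, EX.Musician):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.hasMember, RDF.type, OWL.ObjectProperty))

# Person and Car are disjoint classes -- not expressible in RDF Schema
g.add((EX.Person, OWL.disjointWith, EX.Car))

# A string quartet has exactly four musicians as members
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.hasMember))
g.add((restriction, OWL.cardinality, Literal(4, datatype=XSD.nonNegativeInteger)))
g.add((EX.StringQuartet, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))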

One of the goals of this document is to specify what is needed in a web ontology language. These requirements will be motivated by potential use cases and general design objectives that take into account the difficulties in applying the standard notion of ontologies to the unique environment of the Web.

1.2 Why OWL?

The Semantic Web is a vision for the future of the Web in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the Web. The Semantic Web will build on XML’s ability to define customized tagging schemes and RDF’s flexible approach to representing data. The first level above RDF required for the Semantic Web is an ontology language that can formally describe the meaning of terminology used in Web documents. If machines are expected to perform useful reasoning tasks on these documents, the language must go beyond the basic semantics of RDF Schema. The OWL Use Cases and Requirements Document provides more details on ontologies, motivates the need for a Web Ontology Language in terms of six use cases, and formulates design goals, requirements and objectives for OWL.

OWL has been designed to meet this need for a Web Ontology Language. OWL is part of the growing stack of W3C recommendations related to the Semantic Web.

  • XML provides a surface syntax for structured documents, but imposes no semantic constraints on the meaning of these documents.
  • XML Schema is a language for restricting the structure of XML documents and also extends XML with datatypes.
  • RDF is a datamodel for objects (“resources”) and relations between them, provides a simple semantics for this datamodel, and these datamodels can be represented in an XML syntax.
  • RDF Schema is a vocabulary for describing properties and classes of RDF resources, with a semantics for generalization-hierarchies of such properties and classes.
  • OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. “exactly one”), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.

1.3 The three sublanguages of OWL

OWL provides three increasingly expressive sublanguages designed for use by specific communities of implementers and users.

  • OWL Lite supports those users primarily needing a classification hierarchy and simple constraints. For example, while it supports cardinality constraints, it only permits cardinality values of 0 or 1. It should be simpler to provide tool support for OWL Lite than its more expressive relatives, and OWL Lite provides a quick migration path for thesauri and other taxonomies. OWL Lite also has a lower formal complexity than OWL DL; see the section on OWL Lite in the OWL Reference for further details.
  • OWL DL supports those users who want the maximum expressiveness while retaining computational completeness (all conclusions are guaranteed to be computable) and decidability (all computations will finish in finite time). OWL DL includes all OWL language constructs, but they can be used only under certain restrictions (for example, while a class may be a subclass of many classes, a class cannot be an instance of another class). OWL DL is so named due to its correspondence with description logics, a field of research that has studied the logics that form the formal foundation of OWL.
  • OWL Full is meant for users who want maximum expressiveness and the syntactic freedom of RDF with no computational guarantees. For example, in OWL Full a class can be treated simultaneously as a collection of individuals and as an individual in its own right. OWL Full allows an ontology to augment the meaning of the pre-defined (RDF or OWL) vocabulary. It is unlikely that any reasoning software will be able to support complete reasoning for every feature of OWL Full.

Each of these sublanguages is an extension of its simpler predecessor, both in what can be legally expressed and in what can be validly concluded. The following set of relations hold. Their inverses do not.

  • Every legal OWL Lite ontology is a legal OWL DL ontology.
  • Every legal OWL DL ontology is a legal OWL Full ontology.
  • Every valid OWL Lite conclusion is a valid OWL DL conclusion.
  • Every valid OWL DL conclusion is a valid OWL Full conclusion.

Ontology developers adopting OWL should consider which sublanguage best suits their needs. The choice between OWL Lite and OWL DL depends on the extent to which users require the more-expressive constructs provided by OWL DL. The choice between OWL DL and OWL Full mainly depends on the extent to which users require the meta-modeling facilities of RDF Schema (e.g. defining classes of classes, or attaching properties to classes). When using OWL Full as compared to OWL DL, reasoning support is less predictable since complete OWL Full implementations do not currently exist.

OWL Full can be viewed as an extension of RDF, while OWL Lite and OWL DL can be viewed as extensions of a restricted view of RDF. Every OWL (Lite, DL, Full) document is an RDF document, and every RDF document is an OWL Full document, but only some RDF documents will be legal OWL Lite or OWL DL documents. Because of this, some care has to be taken when a user wants to migrate an RDF document to OWL. When the expressiveness of OWL DL or OWL Lite is deemed appropriate, some precautions have to be taken to ensure that the original RDF document complies with the additional constraints imposed by OWL DL and OWL Lite. Among others, every URI that is used as a class name must be explicitly asserted to be of type owl:Class (and similarly for properties), every individual must be asserted to belong to at least one class (even if only owl:Thing), and the URIs used for classes, properties and individuals must be mutually disjoint. The details of these and other constraints on OWL DL and OWL Lite are explained in appendix E of the OWL Reference.

 

2 Protégé (knowledge-based applications with ontologies)

Protégé is a free, open-source platform that provides a growing user community with a suite of tools to construct domain models and knowledge-based applications with ontologies.

References:
http://www.w3.org/TR/webont-req/#onto-def
http://www.w3.org/TR/owl-features/
http://protege.stanford.edu/

Xcode to Custom Keyboard in iOS

A custom keyboard replaces the system keyboard for users who want capabilities such as a novel text input method or the ability to enter text in a language not otherwise supported in iOS. The essential function of a custom keyboard is simple: Respond to taps, gestures, or other input events and provide text, in the form of an unattributed NSString object, at the text insertion point of the current text input object.

After a user chooses a custom keyboard, it becomes the keyboard for every app the user opens. For this reason, a keyboard you create must, at minimum, provide certain base features. Most important, your keyboard must allow the user to switch to another keyboard.

Understand User Expectations for Keyboards

To understand what users expect of your custom keyboard, study the system keyboard—it’s fast, responsive, and capable. And it never interrupts the user with information or requests. If you provide features that require user interaction, add them not to the keyboard but to your keyboard’s containing app.

Keyboard Features That iOS Users Expect

There is one feature that iOS users expect and that every custom keyboard must provide: a way to switch to another keyboard. On the system keyboard, this affordance appears as a button called the globe key. iOS 8 provides specific API for your “next keyboard” control, described in Providing a Way to Switch to Another Keyboard.

The system keyboard presents an appropriate key set or layout based on the UIKeyboardType trait of the current text input object. With the insertion point in the To: field in Mail, for example, the system keyboard period key changes: When you press and hold that key, you can pick from among a set of top-level domain suffixes. Design your custom keyboard with keyboard type traits in mind.

iOS users also expect autocapitalization: In a standard text field, the first letter of a sentence in a case-sensitive language is automatically capitalized.

These features and others are listed next.

  • Appropriate layout and features based on keyboard type trait

  • Autocorrection and suggestion

  • Automatic capitalization

  • Automatic period upon double space

  • Caps lock support

  • Keycap artwork

  • Multistage input for ideographic languages

You can decide whether or not to implement such features; there is no dedicated API for any of the features just listed, so providing them is a competitive advantage.

Article with example: http://www.appdesignvault.com/ios-8-custom-keyboard-extension/

Reference: https://developer.apple.com/library/ios/documentation/General/Conceptual/ExtensibilityPG/Keyboard.html