How to build a FREE Private Cloud using Microsoft Technologies….

This document will guide you through the process of setting up the bare minimum components to demo a Private Cloud environment using current release versions of Microsoft products and technologies. It is NOT meant for, nor is it an ideal configuration for, a production environment. If you have a Technet or MSDN subscription, then you already have all the software you need. Otherwise, you can download FREE TRIAL versions of all the necessary components from the Microsoft Technet Evaluation Center.

Once the installation and configuration are complete, you will be able to demo the use of System Center Virtual Machine Manager and the SCVMM Self Service Portal 2.0 to build and manage a Private Cloud. With additional software and hardware resources, this configuration can be expanded to include additional System Center Technologies to demonstrate a much broader Private Cloud implementation including monitoring, reporting, change management, deployment and more. There are free trial versions of all the System Center products at the Microsoft Technet Evaluation Center.

There is an assumption that you have at least a basic knowledge of the roles and services in Windows 2008 R2, a cursory knowledge of how to install SQL Server 2008 R2, and a basic understanding of how System Center Virtual Machine Manager works. Additional documents and walkthroughs may be produced for more detail. If there is something you would like to have more information on, please comment on this blog post and let me know.

If you plan on doing this in a single sitting, bring plenty of your favorite caffeinated beverage, some good music to listen to, maybe even a good book, and a lot of patience. There is a lot of “hurry up and wait” that takes place during this setup. Expect to spend 6-10 hours depending on how fast your hardware is and how efficient you are. This guide could be condensed even further to combine certain steps and reduce setup time slightly, but I have opted to make it as foolproof as possible. If you follow this guide exactly, you should not see any errors or failures during the installation.

The resultant demo configuration does not provide for any failover or redundancy and is intended solely as a lightweight demo/test/learning environment. The concepts here can be used as a template to install a production Private Cloud, but please, do not implement this configuration in production without speaking to the appropriate persons that administer your network. If you implement this in production, you do so at your own risk and you should have an updated resume available.

Architecture:
Host Machine – Windows Server 2008 R2 + SP1 + all post SP1 Updates

Roles: Active Directory Domain Services, DNS Server, Hyper-V, Web Server (IIS)

Software: SQL Server 2008 R2 x64, System Center Virtual Machine Manager 2008 R2 Server Components and Administrator Console, SCVMM Self Service Portal 2.0

Guest VM’s – Once this install is complete, you can create whatever guest VM’s you like to use for testing and demoing. In a future document I will detail a list of resources you may wish to create so you have a relevant test and demo environment.

Hardware Requirements:

I personally recommend using a desktop computer because of the drive options available. However, a high-end laptop can be used. I have performed this install on both hardware platforms in the following configurations:

Laptop: Lenovo W510 (quad-core with Hyper-Threading), 16 GB RAM, (1) 7200 RPM SATA drive for the host operating system, (1) 140 GB solid state drive for guest VM storage

***This is the platform I used when creating this document***

Pros: Compact, very portable

Cons: Disk I/O and potential CPU bottlenecks decrease performance. This can be mitigated by investing in a higher-end disk drive and/or a laptop with greater processing capabilities, but that increases the cost dramatically. Overall a more expensive solution even with lower-end components.

Desktop: Quad-core CPU, 16 GB RAM, (1) 7200 RPM SATA drive for the host operating system, (2 or more) 7200 RPM+ SATA drives for guest VM storage (these drives can be striped as RAID-0 for additional performance *or* they can be formatted independently with guest VM’s placed on separate spindles. For my desktop implementation at home I am using the RAID-0 option)

Pros: Better performance due to disk drive configuration options. Lower cost of desktop PC components make this a less expensive solution even with higher end hardware.

Cons: More of a fixed solution, less portable. Could potentially use an ultra-mini case or small “media center” type case to increase portability; however, desktop components are not designed to be moved around a lot, so you are at a higher risk of component failure.

I also *highly recommend* a high-capacity dedicated external storage device for backing up configurations along the way. The entirety of this private cloud configuration is relatively simple, but the overall process is time consuming. The more frequently you backup/snapshot at key stages, the less time you will spend rebuilding from scratch.

Software Requirements:

If you have a Technet or MSDN subscription you have everything you need. If you do not have a Technet or MSDN subscription you can use free trial software for everything. Just be mindful of the individual timebombs and make note of when things expire. Using the pieces below you should be able to run for 180 days from the day the Host machine OS is installed.

Windows Server 2008 R2 with SP1 Trial

System Center Virtual Machine Manager 2008 R2 with SP1 Trial

Microsoft SQL Server 2008 R2 Trial (get the 64bit version)

Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 with SP1

Suggested Pre-Reading/Learning:

An assumption is being made that you are familiar with installing and configuring Windows Server 2008 R2 and its related Roles and Features. If not, then you should bookmark and leverage the following –

Microsoft Technet Windows Server TechCenter

Additional Resources:

Microsoft SQL Server 2008 R2 TechCenter

System Center Virtual Machine Manager 2008 R2 TechCenter

System Center Virtual Machine Manager Self-Service Portal 2.0 TechCenter

The Heavy Lifting – Installing the components

This section of the guide will walk you through the installation of each and every piece of the Microsoft Private Cloud solution. I have chosen an abbreviated, rapid-fire approach to this install. There are no screenshots. I do not go into detail around the choices made on the selection screens. If the options on a screen are not discussed in the document, you can assume the default selections will suffice.

There is a lot of opportunity to customize things along the way. There is a lot of opportunity to poke around and make changes during setup or while waiting on files to copy. I recommend that you NOT do this if you can avoid it. This document should provide a 100% success rate with ZERO errors during install if you follow it exactly. If you choose to stray and make changes during the install, you do so at the risk of your own time invested in this process.

Grab that caffeinated beverage. Take a big sip. Start your music. Take a deep breath. Here we go….

Install the Hyper-V Host

Windows Server 2008 R2 is the foundation upon which we build the entire private cloud. We leverage the built-in Hyper-V hypervisor to virtualize the servers, clients and applications that can then be served up through the self-service portal. It is absolutely essential that the base server is installed properly and is 100% stable.

Pre-install hardware configuration – Ensure that you have enabled virtualization support in the BIOS of your computer. How this is managed/enabled depends on the PC manufacturer and the BIOS used. You should also make sure that Data Execution Prevention (DEP) is active. There is a great blog post that talks about how to do this here — http://blogs.technet.com/b/iftekhar/archive/2010/08/09/enable-hardware-settings-in-bios-to-run-hyper-v.aspx
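While the BIOS virtualization switch varies by manufacturer, the DEP side can be checked from an elevated command prompt once Windows is installed. This is a sketch of the standard bcdedit commands; the `nx` values shown (OptIn/OptOut/AlwaysOn/AlwaysOff) are the documented policy names:

```shell
REM Show the current boot entry; look for the "nx" line to see the DEP policy
bcdedit /enum {current}

REM If DEP has been turned off, restore the default opt-in policy and reboot
bcdedit /set {current} nx OptIn
```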

*I recommend rebooting after each line item below*

Install Windows 2008 R2

Install any BIOS updates/hardware drivers/manufacturer updates for your system

Install SP1 (can be skipped if you installed Windows 2008 R2 + SP1 integrated)

Install all post-SP1 updates from Windows Update

*after each update install completes, reboot and run Windows Update again until no further updates are offered*

Optional – Rename host to desired friendly name

Install Necessary Windows Server Roles and Features

Add the Role: Active Directory Domain Services

Run the Active Directory Domain Services installation wizard (dcpromo.exe)

Create a new domain in a new forest

Supply FQDN of the new forest root domain (e.g., privatecloud.local)

Supply Domain NetBIOS name (e.g., PRIVATECLOUD)

Select Forest Functional Level (Windows 2003 is fine)

Select Domain Functional Level (Windows 2003 is fine)

Allow DNS to be installed (Assign Static IP if necessary)

(***I assigned a static IP address/mask for my local subnet and pointed to my default gateway. I then configured DNS with forwarders of 4.2.2.1 and 4.2.2.2 – these are Level 3’s public DNS servers. This allows for Internet access to download Windows Updates or other software as needed***)

Location for Database, Log Files, SYSVOL = Default

Assign Password for Directory Services Restore Mode

Complete Wizard and Reboot
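If you prefer to script the domain creation rather than click through the wizard, the dcpromo steps above map onto an unattend answer file. This is a hedged sketch using the example names from this guide (privatecloud.local / PRIVATECLOUD); the DSRM password is a placeholder you must supply, and ForestLevel/DomainLevel 2 corresponds to the Windows Server 2003 functional level:

```shell
REM Build the dcpromo answer file (values match the wizard steps above)
(
  echo [DCInstall]
  echo ReplicaOrNewDomain=Domain
  echo NewDomain=Forest
  echo NewDomainDNSName=privatecloud.local
  echo DomainNetbiosName=PRIVATECLOUD
  echo ForestLevel=2
  echo DomainLevel=2
  echo InstallDNS=Yes
  echo SafeModeAdminPassword=^<your DSRM password^>
  echo RebootOnCompletion=Yes
) > C:\dcpromo-answer.txt

REM Promote the host to a domain controller using the answer file
dcpromo /unattend:C:\dcpromo-answer.txt

REM After the reboot, configure the DNS forwarders on the local DNS server
dnscmd /ResetForwarders 4.2.2.1 4.2.2.2
```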

Add the Role: Hyper-V

Create Virtual Network: Attach to local Ethernet

Complete Wizard and Reboot

After logon, allow the wizard to finish configuring Hyper-V

Install Web Server (IIS) Role

IIS is required by the Self Service Portal 2.0. The portal also requires specific Web Server (IIS) role services and the Message Queuing feature to be enabled.

Add the Role: Web Server (IIS) – Next

Role Services – Select:

Static Content

Default Document

ASP.NET

.NET Extensibility

ISAPI Extensions

ISAPI Filters

Request Filtering

Windows Authentication

IIS6 Metabase Compatibility

Confirmation – Install

Add the Feature: Message Queuing – Next

Confirmation – Install
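The role services and feature above can also be added in one pass with the ServerManager PowerShell module that ships with Windows Server 2008 R2. A sketch, assuming an elevated PowerShell session; the feature names used here are the ones reported by Get-WindowsFeature:

```shell
# Load the Server Manager cmdlets (required on 2008 R2)
Import-Module ServerManager

# Web Server (IIS) role services from the list above, plus Message Queuing
Add-WindowsFeature Web-Static-Content, Web-Default-Doc, Web-Asp-Net, `
    Web-Net-Ext, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Filtering, `
    Web-Windows-Auth, Web-Metabase, MSMQ-Server
```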

The Windows Server 2008 R2 foundation is now complete!

The Windows Server 2008 R2 + Hyper-V host is now complete. There are a few (not really) optional steps below you may wish to take just for your own sanity.

Optional (recommended) – Install Windows Server Backup Features

Optional (recommended) – Perform Bare Metal Recovery Backup to external storage using Windows Backup (or the backup system of your choice)

Install SQL Server 2008 R2

SQL Server 2008 R2 is used for storing configuration information for System Center Virtual Machine Manager and the SCVMM Self-Service Portal. You do not need to be a SQL guru to get things up and running or even for day-to-day operations. You can pretty much forget about SQL except for routine patching. The exception to this (there are always exceptions) is if you use this document to implement a Private Cloud in a production environment using an existing production SQL Server. In that case, I beg you to speak to your SQL Admin *BEFORE* doing anything with SQL. You have been warned.

Launch SQL setup

New Installation or add features to an existing installation

Enter Product key or Specify a free edition

Accept License

Setup Support Files – Install

Setup Support Rules – Address any issues – Next

SQL Server Feature Installation – Next

Feature Selection – Select

Database Engine Services

Management Tools Basic

Default paths – Next

Installation Rules – Next

Default Instance (MSSQLSERVER) – Next

Disk Space Requirements – Next

Use the same account for all SQL server services

(if this host will be connecting to a network or the Internet, then I suggest following SQL security guidelines and creating unique accounts for each service. If you will only be using this for non-network-connected demonstrations, you can use the domainname\Administrator account for simplicity)

Supply credentials – Next

Windows authentication mode – Add current user – domainname\Administrator – Next

Error Reporting – Your choice – Next

Installation Configuration Rules – Next

Ready to Install – Summary – Install

Complete – Close

Windows Update – Check for Updates – Install – Reboot

(This one takes quite a while. Go get something to eat.)
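For reference, the wizard choices above correspond to SQL Server 2008 R2's documented command-line setup parameters. This is a sketch only, with placeholder credentials matching the simple single-account approach used in this guide; consult the SQL Server setup documentation before using it anywhere that matters:

```shell
REM Run from the root of the SQL Server 2008 R2 media, elevated prompt.
REM /QS shows progress without prompting; features = engine + Management Tools.
setup.exe /ACTION=Install /FEATURES=SQLEngine,SSMS ^
    /INSTANCENAME=MSSQLSERVER ^
    /SQLSVCACCOUNT="PRIVATECLOUD\Administrator" /SQLSVCPASSWORD="<password>" ^
    /AGTSVCACCOUNT="PRIVATECLOUD\Administrator" /AGTSVCPASSWORD="<password>" ^
    /SQLSYSADMINACCOUNTS="PRIVATECLOUD\Administrator" ^
    /IACCEPTSQLSERVERLICENSETERMS /QS
```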

Install System Center Virtual Machine Manager 2008 R2 + SP1

VMM Server Component

Start SCVMM Setup – Setup – VMM Server

Accept License – Next

CEIP – Your choice – Next

Product Registration – Fill in – Next

Prerequisite Check – Next

Installation Location – Default is fine – Next

SQL Server Settings – Use a supported version of SQL Server:

Server name: <name of localhost>

Check – Use the following credentials:

User name: <domain>\Administrator

Password: <password>

Select or enter a SQL instance: Drop down to MSSQLSERVER

Select or enter a database: <enter a database name, e.g., SCVMMDB>

Check – Create a new database

Library Share Settings

Create a new library share – Defaults are fine – Next

Installation Settings

Ports – Defaults are fine

VMM Server Account – Local System is fine – Next

Summary of Settings – Install

Install the VMM Administrator Console

Once the Virtual Machine Manager Administrator Console is installed, this will become the primary interface used when dealing with your virtualization infrastructure. There will be times you will want or need to go back to the standard Hyper-V MMC, but you should get comfortable with the SCVMM Administrator console for day-to-day operations.

Start SCVMM Setup – Setup – VMM Administrator Console

Accept License – Next

CEIP – Your choice – Next

Prerequisite Check – Next

Installation Location – Default is fine – Next

Port – 8100 – Default is fine

Summary of Settings – Install

Windows Update – Check for Updates – Install – Reboot

Take a deep breath. Switch from caffeine to ….something more calming. You are almost done.

Almost….

Install the SCVMM Self-Service Portal 2.0 with SP1

***Note – You probably noticed an option to install a Self Service Portal from within the SCVMM Setup interface. DO NOT INSTALL THIS VERSION. It is an older version and does not provide the most current functionality. Download the SSP 2.0 + SP1 version from the link in the “Software Requirements” section of this document.***

The Self-Service Portal is one of the defining features of the Microsoft Private Cloud. Through this portal, administrators can create resource pools consisting of networks, storage, load balancers, virtual machine templates and domains. Administrators can then create and manage business units that can use the self-service portal to request these pools of resources and create them on demand.

Start SSP2.0 Setup

Getting Started – (License page) – Accept – Next

Select

VMMSSP Server Component

VMMSSP Website Component

Next

Prerequisite Page – Should be all green – Next

VMMSSP Files – Default is fine – Next

Database Server: <localhost name>

Click – Get Instances

SQL Server Instance: Default

Credentials: Connect using Windows Authentication

Create a new Database or…..: Create a new database

Next

Provide an account for the server component

User Name: Administrator

Password: <password>

Domain: <domainname>

Test Account – Next

Configure WCF Endpoints – Defaults are fine – Next

Provide a list of Database Administrators

<domainname>\Administrator

Next

Configure the IIS web site for the VMMSSP website component

IIS website name:  VMMSSP <default>

Port Number:  81  <you cannot use 80 since it is assigned to the default web site>

Application pool name:  VMMSSPAppPool  <default>

User Name:  Administrator

Password :  <password>

Domain:  <domainname>

Next

Installation Summary – Install – Yes to Dialog

Close the final window.
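Once setup closes, you can sanity-check the portal's IIS configuration from an elevated prompt using IIS 7.5's appcmd tool. A sketch, assuming the default site and application pool names used above:

```shell
REM List IIS sites; the VMMSSP site should be bound to port 81
%windir%\system32\inetsrv\appcmd list sites

REM Confirm the portal's application pool exists and is started
%windir%\system32\inetsrv\appcmd list apppool "VMMSSPAppPool"
```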

Windows Update – Check for Updates – Install – Reboot

Once logged in:

Delete any setup files or unnecessary files/data you will not use for demonstration purposes

Empty the Recycle Bin

NOT OPTIONAL – Perform Bare Metal Recovery Backup to external storage using Windows Backup (or the backup system of your choice). Trust me. At this point you have 6-10 hours invested in this setup and you do NOT want to have to start over.
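With the Windows Server Backup features installed, this bare metal recovery backup can be started from the command line as well. A sketch, assuming your external drive is mounted as E::

```shell
REM Back up all volumes needed for bare metal recovery to the external drive
wbadmin start backup -backupTarget:E: -allCritical -quiet
```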

You now have the hardware and software in place to demo a private cloud!

However, a Private Cloud is more about HOW you use the infrastructure to create value, provide self-service, reduce overhead, automate resource creation and, ultimately, reduce costs.

In the next document I produce, I will define a list of resources to create using the Hyper-V MMC, System Center Virtual Machine Manager, and the SCVMM Self-Service portal. I will then do a few recorded demos with these resources that you can customize for your own demonstration purposes.

Call To Action

Download a hard copy of this document for your own reference –

Bookmark my blog and watch for more posts and screen casts on Private Cloud. Here are some of the Planned Posts/Content/Screencasts I am working on:

Configuring Basic Resources for use in a Private Cloud

Creating virtual hard disks

Creating virtual machines

Creating templates in SCVMM

Creating Hardware and OS profiles in SCVMM

Configuring and using the Self-Service Portal 2.0

Initial Configuration

Creating and managing Infrastructures

Working with Virtual Machines

Managing User Roles and Business Units

Walking through the Request process

If there is a particular feature or process you would like to know more about, please contact me through a comment to this post or in email and we will discuss getting it produced.

For now, have fun playing with your new Private Cloud! (AFTER that bare metal recovery backup!)

Cheers!

Reference : http://blogs.technet.com/b/chrisavis/archive/2011/09/05/how-to-build-a-free-private-cloud-using-microsoft-technologies.aspx

How to build a private cloud

If you’re nervous about running your business applications on a public cloud, many experts recommend that you take a spin around a private cloud first.

Cloud: Ready or not?

But building and managing a cloud within your data center is not just another infrastructure project, says Joe Tobolski, director of cloud computing at Accenture.

“A number of technology companies are portraying this as something you can go out and buy – sprinkle a little cloud-ulator powder on your data center and you have an internal cloud,” he says. “That couldn’t be further from the truth.”

An internal, on-premises private cloud is what leading IT organizations have been working toward for years. It begins with data center consolidation, rationalization of OS, hardware and software platforms, and virtualization up and down the stack – servers, storage and network, Tobolski says.

Elasticity and pay-as-you-go pricing are guiding principles, which imply standardization, automation and commoditization of IT, he adds.

And it goes way beyond infrastructure and provisioning resources, Tobolski adds. “It’s about the application build and the user’s experience with IT, too.”

Despite all the hype, we’re at a very early stage when it comes to internal clouds. According to Forrester Research, only 5% of large enterprises globally are even capable of running an internal cloud, with maybe half of those actually having one, says James Staten, principal analyst with the firm.

But if you’re interested in exploring private cloud computing, here’s what you need to know.


First steps: Standardization, automation, shared resources

Forrester’s three tenets for building an internal cloud are similar to Accenture’s precepts for next-generation IT.

To build an on-premises cloud, you must have standardized – and documented — procedures for operating, deploying and maintaining that cloud environment, Staten says.

Most enterprises are not nearly standardized enough, although companies moving down the IT Infrastructure Library (ITIL) path for IT service management are closer to this objective than others, he adds.

Standardized operating procedures that allow efficiency and consistency are critical for the next foundational layer, which is automation. “You have to be trusting of and a big-time user of automation technology,” Staten says. “That’s usually a big hurdle for most companies.”

Automating deployment is probably the best place to start because that enables self-service capabilities. And for a private cloud, this isn’t Amazon-style self-service, in which any developer can deploy virtual machines (VMs) at will. “That’s chaos in a corporation and completely unrealistic,” Staten says.

Rather, for a private cloud, self-service means that an enterprise has established an automated workflow whereby resource requests go through an approvals process.

Once approved, the cloud platform automatically deploys the specified environment. More often, private cloud self-service is about developers asking for “three VMs of this size, a storage volume of this size and this much bandwidth,” Staten says. Self-service for end users seeking resources from the internal company cloud would be “I need a SharePoint volume or a file share.”

Thirdly, building an internal cloud means sharing resources – “and that usually knocks the rest of the companies off the list,” he says.

This is not about technology. “It’s organizational — marketing doesn’t want to share servers with HR, and finance won’t share with anybody. When you’re of that mindset, it’s hard to operate a cloud. Clouds are highly inefficient when resources aren’t shared,” Staten says.

Faced with that challenge, IT Director Marcos Athanasoulis has come up with a creative way to get participants comfortable with the idea of sharing resources on the Linux-based cloud infrastructure he oversees at Harvard Medical School (HMS) in Boston. It’s a contributed hardware approach, he says.

At HMS, which Athanasoulis calls the land of 1,000 CIOs, IT faces a bit of a unique challenge. It doesn’t have the authority to tell a lab what technology to use. It has some constraints in place, but if a lab wants to deploy its own infrastructure, it can. So when HMS approached the cloud concept four years ago, it did so wanting “a model where we could have capacity available in a shared way that the school paid for and subsidized so that folks with small needs could come in and get what they needed to get their research done but also be attractive to those labs that would have wanted to build their own high-performance computing or cloud environments if we didn’t offer a suitable alternative.”

With this approach, if a lab bought 100 nodes in the cloud, it got guaranteed access to that capacity. But if that capacity was idle, others’ workloads could run on it, Athanasoulis says.

“We told them – you own this hardware but if you let us integrate into the cloud, we’ll manage it for you and keep it updated and patched. But if you don’t like how this cloud is working, you can take it away.” He adds, “That turned out to be a good selling point, and not once [in four years] has anybody left the cloud.”

To support the contributed hardware approach, HMS uses Platform Computing’s Platform LSF workload automation software, Athanasoulis says. “The tool gives us the ability to set up queues and suspend jobs that are on the contributed hardware nodes, so that the people who own the hardware get guaranteed access and that suspended jobs get restored.”

Don’t proceed until you understand your services

If clouds are inefficient when resources aren’t shared, they can be outright pointless if services aren’t considered before all else. IBM, for example, begins every potential cloud engagement with an assessment of the different types of workloads and the risk, benefit and cost of moving each to different cloud models, says Fausto Bernardini, director of IT strategy and architecture, cloud portfolio services, at IBM.

Whether a workload has affinity with a private, public or hybrid model depends on a number of attributes, including such key ones as compliance and security but others, too, such as latency and interdependencies of components in applications, he says.

Many enterprises think about building a private cloud from a product perspective before they consider services and service requirements – and that’s the exact opposite of where to start, says Tom Bittman, vice president and distinguished analyst at Gartner.

“If you’re really going to build a private cloud, you need to know what your services are, and what the [service-level agreements], costs and road maps are for each of those. This is really about understanding whether the services are going toward the cloud computing style or not,” he says.

Common services with relatively static interfaces, even if your business is highly reliant on them, are those you should be considering for cloud-style computing, private or public, Bittman says. E-mail is one example.

“I may use it a lot, but it’s not intertwined with the inner workings of my company. It’s the kind of service moving in the direction of interface and independence – I don’t want it to be integrated tightly with the company. I want to make it as separate as possible, easy to use, available from self-service interface,” Bittman says. “And if I’ve customized this type of service over time, I’ve got to undo that and make it as standard as possible.”

Conversely, services that define a business and are constantly the focus of innovative initiatives are not cloud contenders, Bittman says. “The goal for these services is intimacy and integration, and they are never going to the cloud. They may use cloud functions at a low level, like for raw compute, but the interface to the company isn’t going to be a cloud model.”

Only once you understand which services are right for the cloud and how long it might take you to get them to a public-readiness state will you be ready to build a business case and start to look at building a private cloud from a technology perspective, he says.

The final tiers: Service management and access management

Toward that end, Gartner has defined four tiers of components for building a private cloud.

At the bottom sits the resource tier comprising infrastructure, platforms or software. Raw virtualization comes to mind immediately, but VMs aren’t the only option – as long as you’ve got a mechanism for turning resources into a pool you’re on the way, Bittman says. Rapid re-provisioning technology is another option, for example.

Above the resource pool sits the resource management tier. “This is where I manage that pool in an automated manner,” says Bittman, noting that for VMware environments, this is about using VMware Distributed Resource Scheduler.

“These two levels are fairly mature,” Bittman says. “You can find products for these available in the market, although there’s not a lot of competition yet at the resource management tier.”

Next comes the service management tier. “This is where there’s more magic required,” he says. “I need something that lets me do service governance, something that lets me convert pools of resources into service levels. In the end, I need to be able to present to the user some kind of service-level interface that says ‘performance’ or ‘availability’ and have this services management tier for delivering on that.”

As you think about building your private cloud, understand that the gap between need and product availability is pretty big, Bittman says. “VMware, for example, does a really good job of allowing you to manage your virtualization pool, but it doesn’t know anything about services. VMware’s vCenter AppSpeed is one early attempt to get started on this,” he adds.

“What we really need is a good service governor, and that doesn’t exist yet,” says Bittman.

Sitting atop it all is the access management tier, which is all about the user self-service interface. “It presents a service catalog, and gives users all the knobs to turn and lets you manage subscribers,” Bittman says. “The interface has to be tied in some way to costing and chargeback, or at least metering – it ties to the service management tier at that level.”

Chargeback is a particularly thorny challenge for private cloud builders, but one that they can’t ignore for long. “It’s tricky from a technology perspective — what do I charge based on? But also from political and cultural perspectives,” Bittman says. “But frankly, if I’m going to move to cloud computing I’m going to move to a chargeback model so that’s going to be one of the barriers that needs to be broken anyways.”

In the end, it’s about the business

And while cloud-builders need to think in terms of elasticity, automation, self-service and chargeback, they shouldn’t be too rigid about the distinctions at this stage of cloud’s evolution, Bittman says. “We will see a lot of organizations doing pure cloud and a lot doing pure non-cloud, and a whole lot of stuff somewhere in the middle. What it all really comes down to is, ‘Is there benefit?'”

Wentworth-Douglass Hospital, in Dover, N.H., for example, is building what it calls a private cloud using a vBlock system from Cisco, EMC and VMware. But it’s doing so more with an eye toward abstraction of servers and not so much on the idea of self-provisioning or software-as-a-service (SaaS), says Scott Heffner, network operations manager for the hospital.

“Maybe we’ll get to SaaS eventually, and we are doing as much automation as we can, but I’m introducing concepts slowly to the organization because the cloud model is so advanced that to get the whole organization to conceive of and understand it right off the bat is too much,” he says.

Reference : http://www.networkworld.com/supp/2010/ndc3/051010-ndc-cloud.html?page=3

Multiplexed Transport Layer Security

In information technology, the Transport Layer Security (TLS) protocol provides connection security with mutual authentication, data confidentiality and integrity, key generation and distribution, and security parameters negotiation. However, missing from the protocol is a way to multiplex application data over a single TLS session.

The Multiplexed Transport Layer Security (MTLS) protocol is a new TLS sub-protocol that runs over TLS or DTLS. The MTLS design provides application multiplexing over a single TLS (or DTLS) session. Therefore, instead of associating a TLS connection with each application, MTLS allows several applications to protect their exchanges over a single TLS session.

MTLS is currently an Internet-Draft (http://tools.ietf.org/html/draft-badra-hajjeh-mtls-05), which expired in October 2009.

Reference : http://en.wikipedia.org/wiki/Multiplexed_Transport_Layer_Security

Near field communication

Near field communication (NFC) is a set of standards for smartphones and similar devices to establish radio communication with each other by touching them together or bringing them into close proximity, usually no more than a few centimetres. Present and anticipated applications include contactless transactions, data exchange, and simplified setup of more complex communications such as Wi-Fi. Communication is also possible between an NFC device and an unpowered NFC chip, called a “tag”.

NFC standards cover communications protocols and data exchange formats, and are based on existing radio-frequency identification (RFID) standards including ISO/IEC 14443 and FeliCa. The standards include ISO/IEC 18092 and those defined by the NFC Forum, which was founded in 2004 by Nokia, Philips and Sony, and now has more than 160 members. The Forum also promotes NFC and certifies device compliance.

Uses

(Image: N-Mark logo for certified devices)

NFC builds upon radio-frequency identification (RFID) systems by allowing two-way communication between endpoints, where earlier systems such as contactless smart cards were one-way only. Since unpowered NFC “tags” can also be read by NFC devices, NFC is also capable of replacing earlier one-way applications.

Commerce

NFC devices can be used in contactless payment systems, similar to those currently used in credit cards and electronic ticket smartcards, and allow mobile payment to replace or supplement these systems. For example, Google Wallet allows consumers to store credit card and store loyalty card information in a virtual wallet and then use an NFC-enabled device at terminals that also accept MasterCard PayPass transactions. Germany, Austria, Latvia and Italy have trialled NFC ticketing systems for public transport. China is using it across the country in public bus transport, and India is implementing NFC-based transactions in box offices for ticketing purposes.

Uses of NFC include:

  • Matching an encrypted security code and transporting an access key;
  • Due to their short transmission range, NFC-based transactions are considered relatively secure;
  • Instant payments and coupon delivery using a handset, as with a credit or debit card;
  • Marketing and exchange of information such as schedules, maps, business cards and coupons using NFC marketing tags;
  • Paying for items by waving a phone over an NFC-capable device;
  • Transferring images and posters for display and printing;
  • Social media, e.g. a Like on Facebook or a Follow on Twitter via NFC smart stickers in retail stores.

Bluetooth and WiFi connections

NFC offers a low-speed connection with extremely simple setup, and could be used to bootstrap more capable wireless connections. It could, for example, replace the pairing step of establishing Bluetooth connections or the configuration of Wi-Fi networks.

Social networking

NFC can be used in social networking situations, such as sharing contacts, photos, videos or files, and entering multiplayer mobile games.

Identity documents

The NFC Forum promotes the potential for NFC-enabled devices to act as electronic identity documents and keycards. As NFC has a short range and supports encryption, it may be more suitable than earlier, less private RFID systems.

Reference : http://en.wikipedia.org/wiki/Near_field_communication

Mobile and Web security will be major topics at Black Hat

Security researchers are expected to disclose new vulnerabilities in near field communication (NFC), mobile baseband firmware, HTML5 and Web application firewalls next week at the Black Hat USA 2012 security conference.

Now in its 15th year, the annual Las Vegas conference draws thousands of security enthusiasts and IT professionals who come to watch some of the industry’s top researchers present their latest findings.

With the rise of smartphones during the last few years, mobile technologies have become a major focus of security research — and for good reason. Many of today’s mobile phones are actually mini computers that store a wealth of sensitive data, which makes them attractive targets for attackers.

Some smartphone vendors have implemented NFC technology to enable contactless mobile payments. Users only have to wave their phones over NFC-capable devices to complete a transaction.

Renowned Apple hacker Charlie Miller, who works as a principal research consultant at security consulting firm Accuvant, has investigated the security of current NFC implementations and found ways in which the technology could be abused to force some mobile phones to parse files and open Web pages without user approval.

In some cases, attackers can take complete control of the phone through NFC, enabling them to steal photos and contacts, send text messages and make calls. Miller will present his findings in what is probably one of the most anticipated talks at this year’s U.S. edition of the conference.

In another mobile security presentation, University of Luxembourg researcher Ralf-Philipp Weinmann will discuss attacks against baseband processors — the phone microprocessors responsible for communicating with cellular networks.

Last year, Weinmann demonstrated how vulnerabilities in the firmware of baseband processors can be exploited to turn mobile phones into remote spying devices after tricking them into communicating with a rogue GSM base station — a scaled-down version of a cell phone tower. The base station had been set up using off-the-shelf hardware and open source software.

This year, Weinmann plans to show that rogue base stations are not even necessary to pull off such attacks, because some baseband vulnerabilities can be exploited over IP-based (Internet Protocol) connections.

If some components of the carrier network are configured in a certain way, a large number of smartphones can be attacked simultaneously, Weinmann said in the description of his presentation.

Mobile malware is viewed as a growing threat, particularly on the Android platform. To protect Android users and prevent malicious applications from being uploaded to Google Play, Google created an automated malware scanning service called Bouncer.

At Black Hat, Nicholas Percoco and Sean Schulte, security researchers from Trustwave, will reveal a technique that allowed them to evade Bouncer’s detection and keep a malicious app on Google Play for several weeks.

The initial app uploaded to Google Play was benign, but subsequent updates added malicious functionality to it, Percoco said. The end result was an app capable of stealing photos and contacts, forcing phones to visit websites and even launching denial-of-service attacks.

Percoco would not discuss the technique in detail ahead of the Black Hat presentation, but noted that it doesn’t require any user interaction. The malicious app is no longer available for download on Google Play and no users were affected during the tests, Percoco said.

Web attacks and vulnerabilities in new Web technologies will also be the subject of several Black Hat presentations this year.

Cybercriminals are increasingly relying on so-called drive-by download attacks to infect computers with malware by exploiting known vulnerabilities in widespread browser plug-ins like Java, Flash Player or Adobe Reader.

Jason Jones, a security researcher with HP DVLabs, Hewlett-Packard’s vulnerability research arm, is scheduled to present an analysis of some of the most commonly used Web exploit toolkits, like Blackhole or Phoenix.

Some of the trends observed by Jones in Web exploit toolkit development this year include an increased reliance on Java exploits and faster integration of exploits for new vulnerabilities.

In the past, Web exploit toolkits targeted vulnerabilities for which patches had been available for over six months or even a year. However, their creators are now integrating exploits for vulnerabilities that are a couple of months old or even unpatched by vendors, Jones said.

Reference : http://www.itworld.com/security/286827/mobile-and-web-security-will-be-major-topics-black-hat

Building a Website with PHP, MySQL and jQuery Mobile, Part 2

This is the second part of a two-part tutorial, in which we use PHP, MySQL and jQuery mobile to build a simple computer web store. In the previous part we created the models and the controllers, and this time we will be writing our views.

jQuery mobile

First, let’s say a few words about the library we will be using. jQuery mobile is a user interface library that sits on top of jQuery and provides support for a wide array of devices in the form of ready-to-use widgets and a touch-friendly development environment. It is still in beta, but upgrading to the official 1.0 release will be as simple as swapping a CDN URL.

The library is built around progressive enhancement. You, as the developer, only need to concern yourself with outputting the correct HTML, and the library will take care of the rest. jQuery mobile makes use of the HTML5 data- attributes and by adding them, you instruct the library how it should render your markup.

In this tutorial we will be using some of the interface components that this library gives us – lists, header and footer bars, and buttons – all of which are defined using the data-role attribute, which you will see in use in the next section.

Rendering Views

The views are PHP files, or templates, that generate HTML code. They are printed by the controllers using the render() helper function. We have 7 views in use for this website – _category.php, _product.php, _header.php, _footer.php, category.php, home.php and error.php – which are discussed later on. First, here is the render() function:

includes/helpers.php

01 /* These are helper functions */
02
03 function render($template, $vars = array()){
04
05     // This function takes the name of a template and
06     // a list of variables, and renders it.
07
08     // This will create variables from the array:
09     extract($vars);
10
11     // It can also take an array of objects
12     // instead of a template name.
13     if(is_array($template)){
14
15         // If an array was passed, it will loop
16         // through it, and include a partial view
17         foreach($template as $k){
18
19             // This will create a local variable
20             // with the name of the object's class
21
22             $cl = strtolower(get_class($k));
23             $$cl = $k;
24
25             include "views/_$cl.php";
26         }
27
28     }
29     else {
30         include "views/$template.php";
31     }
32 }

The first argument of this function is the name of the template file in the views/ folder (without the .php extension). The next is an array with arguments. These are extracted and form real variables which you can use in your template.

There is one more way this function can be called – instead of a template name, you can pass an array with objects. If you recall from last time, this is what is returned by using the find() method. So basically if you pass the result of Category::find() to render, the function will loop through the array, get the class names of the objects inside it, and automatically include the _category.php template for each one. Some frameworks (Rails for example) call these partials.
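The class-name-to-partial mapping can be illustrated with a tiny, self-contained sketch. This is separate from the tutorial's files, and the partialName() helper below is purely illustrative – render() performs the equivalent lookup internally before include-ing the file:

```php
<?php
// Standalone illustration of the naming convention used by render():
// the lowercased class name of each object selects the partial view,
// e.g. a Category object maps to views/_category.php.
class Category {}

function partialName($obj) {
    // Build the partial's path from the object's class name
    return 'views/_' . strtolower(get_class($obj)) . '.php';
}

echo partialName(new Category());  // prints "views/_category.php"
```

Because the lookup is driven entirely by the class name, adding a new model only requires dropping a matching _classname.php file into the views folder.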

Computer Store with PHP, MySQL and jQuery Mobile


The Views

Let’s start off with the first view – the header. You can see that this template is simply the top part of a regular HTML5 page with interleaved PHP code. This view is used in home.php and category.php to promote code reuse.

includes/views/_header.php

<!DOCTYPE html>
<html>
    <head>
    <title><?php echo formatTitle($title)?></title>

    <meta name="viewport" content="width=device-width, initial-scale=1" />

    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0b2/jquery.mobile-1.0b2.min.css" />
    <link rel="stylesheet" href="assets/css/styles.css" />
    <script type="text/javascript" src="http://code.jquery.com/jquery-1.6.2.min.js"></script>
    <script type="text/javascript" src="http://code.jquery.com/mobile/1.0b2/jquery.mobile-1.0b2.min.js"></script>
</head>
<body>

<div data-role="page">

    <div data-role="header" data-theme="b">
        <a href="./" data-icon="home" data-iconpos="notext" data-transition="fade">Home</a>
        <h1><?php echo $title?></h1>
    </div>

    <div data-role="content">
In the head section we include jQuery and jQuery mobile from jQuery’s CDN, and two stylesheets. The body section is where it gets interesting. There we define a div with the data-role=”page” attribute. This div, along with the data-role=”content” div, is required by the library on every page.

The data-role=”header” div is transformed into a header bar. The data-theme attribute chooses one of the 5 standard themes. Inside it, we have a link that is assigned a home icon, and has its text hidden. jQuery Mobile comes with a set of icons you can choose from.

The closing tags (and the footer bar) reside in the _footer.php view:

includes/views/_footer.php

    </div>

    <div data-role="footer" id="pageFooter">
        <h4><?php echo $GLOBALS['defaultFooter']?></h4>
    </div>
</div>

</body>
</html>

Nothing too fancy here. We only have a div with the data-role=”footer” attribute, and inside it we print the globally accessible $defaultFooter variable, defined in includes/config.php.

Neither of the above views is printed directly by our controllers. They are instead used by category.php and home.php:

includes/views/home.php

<?php render('_header',array('title'=>$title))?>

<p>Welcome! This is a demo for a ...</p>
<p>Remember to try browsing this ...</p>

<ul data-role="listview" data-inset="true" data-theme="c" data-dividertheme="b">
    <li data-role="list-divider">Choose a product category</li>
    <?php render($content) ?>
</ul>

<?php render('_footer')?>

As you may recall, the home view was rendered in the home controller. There we passed an array with all the categories, which is available here as $content. So what this view does is print the header and footer, define a jQuery mobile listview (using the data-role attribute), and generate the markup for the categories passed by the controller, using this template (included implicitly by render()):

includes/views/_category.php

<li <?php echo ($active == $category->id ? 'data-theme="a"' : '') ?>>
<a href="?category=<?php echo $category->id?>" data-transition="fade">
    <?php echo $category->name ?>
    <span class="ui-li-count"><?php echo $category->contains?></span></a>
</li>

Notice that we have a $category PHP variable that points to the actual object this view is being generated for. This is done in lines 22 and 23 of the render() function. When the user clicks one of the links generated by the above fragment, they will be taken to the /?category=someid URL, which will show the category.php view, given below.

includes/views/category.php

<?php render('_header',array('title'=>$title))?>

<div class="rightColumn">
    <ul data-role="listview" data-inset="true" data-theme="c" data-dividertheme="c">
        <?php render($products) ?>
    </ul>
</div>

<div class="leftColumn">
    <ul data-role="listview" data-inset="true" data-theme="c" data-dividertheme="b">
        <li data-role="list-divider">Categories</li>
        <?php render($categories,array('active'=>$_GET['category'])) ?>
    </ul>
</div>

<?php render('_footer')?>

This file uses the header, footer and _category views, and also presents a column of products (passed by the category controller). The products are rendered using the _product.php partial:

includes/views/_product.php

<li class="product">
    <img src="assets/img/<?php echo $product->id ?>.jpg" alt="<?php echo $product->name ?>" />
    <?php echo $product->name ?> <i><?php echo $product->manufacturer?></i>
    <b>$<?php echo $product->price?></b>
</li>

As we have an image as the first child of the li elements, it is automatically displayed as an 80px thumbnail by jQuery mobile.

One of the advantages of using the interface components defined in the library is that they are automatically scaled to the width of the device. But what about the columns we defined above? We will need to style them ourselves with some CSS3 magic:

assets/css/styles.css

@media all and (min-width: 650px){

    .rightColumn{
        width:56%;
        float:right;
        margin-left:4%;
    }

    .leftColumn{
        width:40%;
        float:left;
    }

}

.product i{
    display:block;
    font-size:0.8em;
    font-weight:normal;
    font-style:normal;
}

.product img{
    margin:10px;
}

.product b{
    position:absolute;
    right:15px;
    top:15px;
    font-size:0.9em;
}

.product{
    height:80px;
}

Using a media query, we tell the browser that if the view area is wider than 650px, it should display the columns side by side. If it is not (or if the browser does not support media queries) they will be displayed one on top of the other, the regular “block” behavior.

Reference : http://tutorialzine.com/2011/08/jquery-mobile-mvc-website-part-2/

Building a Website with PHP, MySQL and jQuery Mobile, Part 1

In this two-part tutorial, we will be building a simple website with PHP and MySQL, using the Model-View-Controller (MVC) pattern. Finally, with the help of the jQuery Mobile framework, we will turn it into a touch-friendly mobile website that works on any device and screen size.

In this first part, we concentrate on the backend, discussing the database and MVC organization. In part two, we are writing the views and integrating jQuery Mobile.

The File Structure

As we will be implementing the MVC pattern (in effect writing a simple micro-framework), it is natural to split our site structure into different folders for the models, views and controllers. Don’t let the number of files scare you – although we are using a lot of files, the code is concise and easy to follow.

The Directory Structure


The Database Schema

Our simple application operates with two types of resources – categories and products. These are given their own tables – jqm_categories and jqm_products. Each product has a category field, which assigns it to a category.

jqm_categories Table Structure


The categories table has an ID field, a name and a contains column, which shows how many products there are in each category.


jqm_products Table Structure


The products table has name, manufacturer, price and category fields. The latter holds the ID of the category the product belongs to.

You can find the SQL code to create these tables in tables.sql in the download archive. Execute it in the SQL tab of phpMyAdmin to have a working copy of this database. Remember to also fill in your MySQL login details in config.php.
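The contents of config.php are not listed in this article; a minimal sketch might look like the following. The variable names match those consumed by connect.php and the footer view, but the values are placeholders and the actual file in the download archive is the authoritative version:

```php
<?php
// Hypothetical sketch of includes/config.php - see the download archive
// for the real file. The $db_* variables are consumed by
// includes/connect.php, and $defaultFooter is printed by _footer.php.
$db_host = 'localhost';
$db_name = 'your_database';   // the database holding the jqm_* tables
$db_user = 'your_username';
$db_pass = 'your_password';

$defaultFooter = 'Powered by PHP and jQuery Mobile';
```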

The Models

The models in our application will handle the communication with the database. We have two types of resources – products and categories. The models will expose an easy-to-use method – find() – which will query the database behind the scenes and return an array of objects.

Before starting work on the models, we will need to establish a database connection. I am using the PHP PDO class, which means that it would be easy to use a different database than MySQL, if you need to.

includes/connect.php

/*
    This file creates a new MySQL connection using the PDO class.
    The login details are taken from includes/config.php.
*/

try {
    $db = new PDO(
        "mysql:host=$db_host;dbname=$db_name;charset=UTF-8",
        $db_user,
        $db_pass
    );

    $db->query("SET NAMES 'utf8'");
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
catch(PDOException $e) {
    error_log($e->getMessage());
    die("A database error was encountered");
}

This will put the $db connection object in the global scope, which we will use in our models. You can see them below.

includes/models/category.model.php

class Category{

    /*
        The find static method selects categories
        from the database and returns them as
        an array of Category objects.
    */

    public static function find($arr = array()){
        global $db;

        if(empty($arr)){
            $st = $db->prepare("SELECT * FROM jqm_categories");
        }
        else if(isset($arr['id'])){
            $st = $db->prepare("SELECT * FROM jqm_categories WHERE id=:id");
        }
        else{
            throw new Exception("Unsupported property!");
        }

        // This will execute the query, binding the $arr values as query parameters
        $st->execute($arr);

        // Returns an array of Category objects:
        return $st->fetchAll(PDO::FETCH_CLASS, "Category");
    }
}

Both models are simple class definitions with a single static method – find(). In the fragment above, this method takes an optional array as a parameter and executes different queries as prepared statements.

In the return declaration, we are using the fetchAll method, passing it the PDO::FETCH_CLASS constant. What this does is loop through all the result rows and create a new object of the Category class for each. The columns of each row will be added as public properties to the object.
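If you would like to see PDO::FETCH_CLASS in isolation, here is a self-contained example. It uses an in-memory SQLite database purely so it runs without a MySQL server; the fetch behaviour is the same as in the models above:

```php
<?php
// Demonstrates PDO::FETCH_CLASS: every result row becomes an instance of
// the given class, with the row's columns mapped onto its properties.
// SQLite is used here only so the example needs no database server.
class Category {
    public $id;
    public $name;
}

$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec("CREATE TABLE jqm_categories (id INTEGER, name TEXT)");
$db->exec("INSERT INTO jqm_categories VALUES (1, 'Laptops')");

$st = $db->prepare("SELECT * FROM jqm_categories");
$st->execute();
$rows = $st->fetchAll(PDO::FETCH_CLASS, 'Category');

echo get_class($rows[0]) . ': ' . $rows[0]->name;  // prints "Category: Laptops"
```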

This is also the case with the Product model:

includes/models/product.model.php

class Product{

    // The find static method returns an array
    // with Product objects from the database.

    public static function find($arr){
        global $db;

        if(isset($arr['id'])){
            $st = $db->prepare("SELECT * FROM jqm_products WHERE id=:id");
        }
        else if(isset($arr['category'])){
            $st = $db->prepare("SELECT * FROM jqm_products WHERE category = :category");
        }
        else{
            throw new Exception("Unsupported property!");
        }

        $st->execute($arr);

        return $st->fetchAll(PDO::FETCH_CLASS, "Product");
    }
}

The return values of both find methods are arrays with instances of the class. We could possibly return an array of generic objects (or an array of arrays) in the find method, but creating specific instances will allow us to automatically style each object using the appropriate template in the views folder (the ones that start with an underscore). We will talk again about this in the next part of the tutorial.

There, now that we have our two models, let’s move on to the controllers.

Computer Store with PHP, MySQL and jQuery Mobile


The controllers

The controllers use the find() methods of the models to fetch data, and render the appropriate views. We have two controllers in our application – one for the home page, and another one for the category pages.

includes/controllers/home.controller.php

/* This controller renders the home page */

class HomeController{
    public function handleRequest(){

        // Select all the categories:
        $content = Category::find();

        render('home',array(
            'title'     => 'Welcome to our computer store',
            'content'   => $content
        ));
    }
}

Each controller defines a handleRequest() method. This method is called when a specific URL is visited. We will return to this in a second, when we discuss index.php.

In the case of HomeController, handleRequest() just selects all the categories using the model’s find() method, and renders the home view (includes/views/home.php) using our render() helper function (includes/helpers.php), passing a title and the selected categories. Things are a bit more complex in CategoryController:

includes/controllers/category.controller.php

/* This controller renders the category pages */

class CategoryController{
    public function handleRequest(){
        $cat = Category::find(array('id'=>$_GET['category']));

        if(empty($cat)){
            throw new Exception("There is no such category!");
        }

        // Fetch all the categories:
        $categories = Category::find();

        // Fetch all the products in this category:
        $products = Product::find(array('category'=>$_GET['category']));

        // $categories and $products are both arrays with objects

        render('category',array(
            'title'         => 'Browsing '.$cat[0]->name,
            'categories'    => $categories,
            'products'      => $products
        ));
    }
}

The first thing this controller does is select the category by id (passed as part of the URL). If everything goes to plan, it fetches a list of categories, and a list of products associated with the current one. Finally, the category view is rendered.

Now let’s see how all of this works together by inspecting index.php:

index.php

/*
    This is the index file of our simple website.
    It routes requests to the appropriate controllers
*/

require_once "includes/main.php";

try {

    if(isset($_GET['category'])){
        $c = new CategoryController();
    }
    else if(empty($_GET)){
        $c = new HomeController();
    }
    else throw new Exception('Wrong page!');

    $c->handleRequest();
}
catch(Exception $e) {
    // Display the error page using the "render()" helper function:
    render('error',array('message'=>$e->getMessage()));
}

This is the first file that is called on a new request. Depending on the $_GET parameters, it creates a new controller object and executes its handleRequest() method. If something goes wrong anywhere in the application, an exception will be generated, which will find its way to the catch clause and then into the error template.
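The error.php view itself is not listed in this article. Assuming it reuses the header and footer partials like the other views, a minimal sketch could look like this (the file in the download archive is the authoritative version):

```php
<?php // Hypothetical sketch of includes/views/error.php; the real file
      // ships with the download archive. $message is the variable passed
      // by the render() call in index.php's catch clause. ?>
<?php render('_header', array('title' => 'Oops, an error occurred')) ?>

<p><?php echo $message ?></p>

<?php render('_footer') ?>
```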

One more thing that is worth noting, is the very first line of this file, where we require main.php. You can see it below:

main.php

/*
    This is the main include file.
    It is only used in index.php and keeps it much cleaner.
*/

require_once "includes/config.php";
require_once "includes/connect.php";
require_once "includes/helpers.php";
require_once "includes/models/product.model.php";
require_once "includes/models/category.model.php";
require_once "includes/controllers/home.controller.php";
require_once "includes/controllers/category.controller.php";

// This will allow the browser to cache the pages of the store.

header('Cache-Control: max-age=3600, public');
header('Pragma: cache');
header("Last-Modified: ".gmdate("D, d M Y H:i:s",time())." GMT");
header("Expires: ".gmdate("D, d M Y H:i:s",time()+3600)." GMT");

This file holds the require_once declarations for all the models, controllers and helper files. It also defines a few headers to enable caching in the browser (PHP disables caching by default), which improves the performance of the jQuery mobile framework.

Continue to Part 2

With this, the first part of the tutorial is complete! Continue to part 2, where we will write the views and incorporate jQuery Mobile. Feel free to share your thoughts and suggestions in the comment section below.

Reference : http://tutorialzine.com/2011/08/jquery-mobile-product-website/