Google Chrome issue with SSRS

SSRS reports are not officially supported in the Google Chrome browser. Even Firefox has issues rendering SSRS report iFrames, although there is a workaround that seems to work for Firefox.

The problem with Google Chrome is a known issue when using Report Manager. The ideal solution is to use the report server directly (http://localhost/reportserver) instead of Report Manager (http://localhost/reports).
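The report server endpoint can also be driven through SSRS URL access, which bypasses the Report Manager UI entirely. A minimal Python sketch that builds such a URL (the server name and report path are placeholders; rs:Command and rs:Format are standard SSRS URL-access parameters):

```python
from urllib.parse import quote, urlencode

def report_url(server, report_path, fmt=None):
    """Build an SSRS URL-access request against the report server
    endpoint (http://<server>/reportserver) rather than Report
    Manager (http://<server>/reports)."""
    params = {"rs:Command": "Render"}
    if fmt:
        # e.g. "PDF" or "EXCEL" renders the report without the HTML viewer
        params["rs:Format"] = fmt
    return "http://%s/reportserver?%s&%s" % (
        server, quote(report_path, safe="/"), urlencode(params))

url = report_url("localhost", "/Sales/MonthlyReport", fmt="PDF")
print(url)
```

Requesting a rendered format such as PDF this way sidesteps browser-specific rendering of the report viewer iFrame altogether.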

Report Manager



Report Viewer




Power View – SQL Server 2012 (code-named Denali)

Power View, previously code-named Project Crescent, is a SQL Server 2012 Reporting Services add-in for SharePoint 2010 that works with PowerPivot for Excel and SQL Server 2012 tabular models.

Power View (RDLX) and Power Pivot (XLSX)  samples can be downloaded at

Power View is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. They can easily create and interact with views of data from data models based on PowerPivot workbooks published in a PowerPivot Gallery, or tabular models deployed to SSAS instances. Power View is a browser-based Silverlight application launched from SharePoint Server 2010 that enables users to present and share insights with others in their organization through interactive presentations.




  • Presentation ready: Reports look polished and are fit for presentation. Power View is WYSIWYG, so it works with real data and there is no need to preview a report to see how it will look. There are reading and full-screen views, and an interactive Power View report can be exported to a PowerPoint slide. Power View reports can also be published to SharePoint so that users can view and interact with them.
  • Data-model based: Power View is a thin web client, downloaded through the browser, that works against a SharePoint 2010 data model (a PowerPivot model workbook or a tabular model running on an SSAS instance).
  • Visual design experience: There is no separation of design time and run time; you can switch between views and change perspective as easily as working with an Excel sheet and its ribbon controls.
  • Creating data visualizations: A wide variety of visualizations is available (tables, matrices, charts, graphs, bubble charts, and so on), and you can switch between them quickly and easily.
  • Highlighting and filtering data: Different types of filters are available. A filter can be applied to a single visualization or globally, and data can be highlighted through the filters in the visualizations.
  • Sorting: Sorting can be applied to almost anything.
  • Reports with multiple views: A report can contain multiple views, each with different visualizations and filters.
  • Performance: Power View fetches only the data needed for a particular visualization, which is especially beneficial when a visualization is based on millions of records.
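The performance point above can be illustrated with a toy aggregation: a bar chart of totals per category only needs a handful of aggregate values, not the underlying rows, so only the aggregates cross the wire (a simplified sketch of the idea, not Power View's actual query engine):

```python
import random
from collections import defaultdict

# Simulate a large fact table of (region, sales_amount) rows.
random.seed(42)
regions = ["North", "South", "East", "West"]
rows = [(random.choice(regions), random.uniform(10, 500))
        for _ in range(1_000_000)]

# A bar chart of total sales per region only needs four numbers,
# so only these aggregates (not the million rows) are fetched.
totals = defaultdict(float)
for region, amount in rows:
    totals[region] += amount

print(len(rows), "rows reduced to", len(totals), "data points")
```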


What is Domino?

Domino is the name of the applications and messaging server program for Lotus Corporation's Lotus Notes product, a sophisticated groupware application installed in many corporations. Notes lets a corporation and its workers develop communications- and database-oriented applications so that users at different geographic locations can share files, comment on them publicly or privately (to groups with special access), and keep track of development schedules, work projects, guidelines and procedures, plans, white papers, and many other documents, including multimedia files. Lotus uses the Domino name to refer to a set of Notes server applications; Notes itself refers to the overall product.


Notes and Domino servers interact with other Notes/Domino servers in a distributed network. As changes are made to a database at one server, updates are continually forwarded to replicated copies of that database at the other servers, so users are always looking at the same information. In general, Notes follows the client/server model, and the replication updates are made using Remote Procedure Call (RPC) requests. Notes can be coordinated with Web servers and applications on a company's intranet.
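The replication idea can be sketched as a simple reconciliation of two replicas, keeping the most recently modified copy of each document. This is a deliberately simplified last-writer-wins illustration; Domino's real replication protocol is far more involved:

```python
def replicate(local, remote):
    """Merge two replicas of a document database, keeping the most
    recently modified version of each document (last-writer-wins).
    Each replica maps doc_id -> (timestamp, content)."""
    merged = dict(local)
    for doc_id, (ts, content) in remote.items():
        if doc_id not in merged or ts > merged[doc_id][0]:
            merged[doc_id] = (ts, content)
    return merged

server_a = {"doc1": (100, "draft"), "doc2": (200, "plan v1")}
server_b = {"doc1": (150, "reviewed"), "doc3": (120, "notes")}

synced = replicate(server_a, server_b)
print(synced["doc1"])  # the newer copy of doc1 wins
```

Running the merge in both directions brings every replica to the same state, which is what lets users at different locations see the same information.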
Reference :

How to Drop UNDO Tablespace?

Dropping the undo tablespace is not an easy task. Once I had to delete the undo tablespace, and I found that it is not straightforward. I got the following error while dropping it:

SQL> select tablespace_name,file_name from dba_data_files;

TABLESPACE_NAME                FILE_NAME
------------------------------ ---------------------------------------------
USERS                          D:\ORACLE\ORADATA\NOIDA\USERS01.DBF
SYSAUX                         D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF
SYSTEM                         D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF

SQL> drop tablespace undotbs1;
drop tablespace undotbs1
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

As the error indicates, the undo tablespace is in use, so I issued the following command:

SQL> alter tablespace undotbs1  offline;
alter tablespace undotbs1  offline
ERROR at line 1:
ORA-30042: Cannot offline the undo tablespace.

Therefore, to drop the undo tablespace, we have to perform the following steps:

1.) Create a new undo tablespace.
2.) Make it the default undo tablespace, and set undo management to manual, by editing the parameter file and restarting the database.
3.) Check that all segments of the old undo tablespace are offline.
4.) Drop the old tablespace.
5.) Change undo management back to auto by editing the parameter file and restarting the database.

Step 1 : Create new undo tablespace UNDOTBS2

SQL> create undo tablespace UNDOTBS2 datafile 'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF' size 100M;
Tablespace created.

Step 2 : Edit the parameter file

SQL> alter system set undo_tablespace=UNDOTBS2 ;
System altered.

SQL> alter system set undo_management=MANUAL scope=spfile;
System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             360711792 bytes
Database Buffers           58720256 bytes
Redo Buffers                6086656 bytes
Database mounted.
Database opened.
SQL> show parameter undo_tablespace
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDOTBS2

Step 3 : Check that all segments of the old undo tablespace are offline

SQL> select owner, segment_name, tablespace_name, status from dba_rollback_segs order by 3;

OWNER   SEGMENT_NAME                   TABLESPACE_NAME   STATUS
------  -----------------------------  ----------------  -------
SYS     SYSTEM                         SYSTEM            ONLINE
PUBLIC  _SYSSMU10_1192467665$          UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU1_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU2_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU3_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU4_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU5_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU6_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU7_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU8_1192467665$           UNDOTBS1          OFFLINE
PUBLIC  _SYSSMU9_1192467665$           UNDOTBS1          ONLINE
PUBLIC  _SYSSMU12_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU13_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU14_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU15_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU11_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU17_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU18_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU19_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU20_1304934663$          UNDOTBS2          OFFLINE
PUBLIC  _SYSSMU16_1304934663$          UNDOTBS2          OFFLINE

21 rows selected.

If any of the above segments is online, change its status to offline using the command below.

SQL> alter rollback segment "_SYSSMU9_1192467665$" offline;

Step 4 : Drop old undo tablespace

SQL> drop tablespace UNDOTBS1 including contents and datafiles;
Tablespace dropped.

Step  5 : Change undo management to auto and restart the database

SQL> alter system set undo_management=auto scope=spfile;
System altered.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             364906096 bytes
Database Buffers           54525952 bytes
Redo Buffers                6086656 bytes
Database mounted.
Database opened.

SQL> show parameter undo_tablespace
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDOTBS2



How to build a FREE Private Cloud using Microsoft Technologies….

This document will guide you through the process of setting up the bare minimum components to demo a Private Cloud environment using current release versions of Microsoft products and technologies. It is NOT meant for production use, nor is it an ideal configuration for a production environment. If you have a Technet or MSDN subscription then you have all the software you need already. Otherwise you can download FREE TRIAL versions of all the necessary components from the Microsoft Technet Evaluation Center.

Once the installation and configuration are complete, you will be able to demo the use of System Center Virtual Machine Manager and the SCVMM Self Service Portal 2.0 to build and manage a Private Cloud. With additional software and hardware resources, this configuration can be expanded to include additional System Center Technologies to demonstrate a much broader Private Cloud implementation including monitoring, reporting, change management, deployment and more. There are free trial versions of all the System Center products at the Microsoft Technet Evaluation Center.

There is an assumption that you have at least a basic knowledge of the roles and services in Windows 2008 R2, a cursory knowledge of how to install SQL Server 2008 R2, and a basic understanding of how the System Center Virtual Machine Manager works. Additional documents and walkthroughs may be produced for more detail. If there is something you would like to have more information on, please comment to this blog post and let me know.

If you plan on doing this in a single sitting, bring plenty of your favorite caffeinated beverage, some good music to listen to, maybe even a good book, and a lot of patience. There is a lot of “hurry up and wait” during this setup. Expect to spend 6-10 hours depending on how fast your hardware is and how efficient you are. This guide could be condensed further to combine certain steps and reduce setup time slightly, but I have opted to make it as foolproof as possible. If you follow this guide exactly, you should not see any errors or failures during the installation.

The resultant demo configuration does not provide for any failover or redundancy and is intended solely as a lightweight demo/test/learning environment. The concepts here can be used as a template to install a production Private Cloud, but please, do not implement this configuration in production without speaking to the appropriate persons that administer your network. If you implement this in production, you do so at your own risk and you should have an updated resume available.

Host Machine – Windows Server 2008 R2 + SP1 + all post SP1 Updates

Roles: Active Directory Domain Services, DNS Server, Hyper-V, Web Server (IIS)

Software: SQL Server 2008 R2 x64, System Center Virtual Machine Manager 2008 R2 Server Components and Administrator Console, SCVMM Self Service Portal 2.0

Guest VM’s – Once this install is complete, you can create whatever guest VM’s you like to use for testing and demoing. In a future document I will detail a list of resources you may wish to create so you have a relevant test and demo environment.

Hardware Requirements:

I personally recommend using a desktop computer because of the drive options available. However, a high-end laptop can be used. I have performed this install on both hardware platforms in the following configurations:

Laptop: Lenovo W510 (quad processor + hyper-threading), 16gigs RAM, (1) 7200rpm SATA drive for host operating system, (1) 140gig Solid State Drive for guest VM storage

***This is the platform I used when creating this document***

Pros: Compact, very portable

Cons: Disk I/O and potential CPU bottlenecks decrease performance. This can be mitigated by investing in a higher-end disk drive and/or a laptop with greater processing capability, but that increases the cost dramatically. Overall a more expensive solution even with lower-end components.

Desktop: Quad-processor CPU, 16gigs RAM, (1) 7200rpm drive for the host operating system, (2 or more) 7200rpm+ SATA drives for guest VM storage (these drives can be striped as RAID-0 for additional performance *or* formatted independently with guest VM’s placed on separate spindles. For my desktop implementation at home I am using the RAID-0 option)

Pros: Better performance due to disk drive configuration options. Lower cost of desktop PC components make this a less expensive solution even with higher end hardware.

Cons: More of a fixed solution, less portable. You could potentially use an ultra-mini case or small “media center” type case to increase portability; however, desktop components are not designed to be moved around a lot, so you are at a higher risk of component failure.

I also *highly recommend* a high-capacity dedicated external storage device for backing up configurations along the way. The entirety of this private cloud configuration is relatively simple, but the overall process is time consuming. The more frequently you back up or snapshot at key stages, the less time you will spend rebuilding from scratch.

Software Requirements:

If you have a Technet or MSDN subscription you have everything you need. If you do not have a Technet or MSDN subscription you can use free trial software for everything. Just be mindful of the individual timebombs and make note of when things expire. Using the pieces below you should be able to run for 180 days from the day the Host machine OS is installed.
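Since each trial has its own clock, it helps to note expiry dates up front. A quick sketch (the 180-day figure is the host OS trial period mentioned above; the install date below is just an example):

```python
from datetime import date, timedelta

install_date = date(2011, 6, 1)  # example: day the host OS was installed
trial_days = 180                 # host OS trial period from the text

expiry = install_date + timedelta(days=trial_days)
print("Rebuild or re-arm before:", expiry.isoformat())
```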

Windows Server 2008 R2 with SP1 Trial

System Center Virtual Machine Manager 2008 R2 with SP1 Trial

Microsoft SQL Server 2008 R2 Trial (get the 64bit version)

Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 with SP1

Suggested Pre-Reading/Learning:

An assumption is being made that you are familiar with installing and configuring Windows Server 2008 R2 and its related Roles and Features. If not, then you should bookmark and leverage the following –

Microsoft Technet Windows Server TechCenter

Additional Resources:

Microsoft SQL Server 2008 R2 TechCenter

System Center Virtual Machine Manager 2008 R2 TechCenter

System Center Virtual Machine Manager Self-Service Portal 2.0 TechCenter

The Heavy Lifting – Installing the components

This section of the guide will walk you through the installation of each and every piece of the Microsoft Private Cloud solution. I have chosen an abbreviated rapid fire approach to this install. There are no screen shots. I do not go into detail around the choices made on the selection screens. If the options on a screen are not discussed in the document, you can assume the default selections will suffice.

There is a lot of opportunity to customize things along the way. There is a lot of opportunity to poke around and make changes during setup or while waiting on files to copy. I recommend that you NOT do this if you can avoid it. This document should provide a 100% success rate with ZERO errors during install if you follow it exactly. If you choose to stray and make changes during the install, you do so at the risk of your own time invested in this process.

Grab that caffeinated beverage. Take a big sip. Start your music. Take a deep breath. Here we go….

Install the Hyper-V Host

Windows Server 2008 R2 is the foundation upon which we build the entire private cloud. We leverage the built-in Hyper-V hypervisor to virtualize the servers, clients and applications that are then served up through the self-service portal. It is absolutely essential that the base server is installed properly and is 100% stable.

Pre-install hardware configuration – Ensure that you have enabled virtualization support in the BIOS of your computer. How this is managed/enabled depends on the PC Manufacturer and the BIOS used. You should also make sure the Data Execution Prevention (DEP) is active. There is a great blog post that talks about how to do this here —

*I recommend rebooting after each line item below*

Install Windows 2008 R2

Install any BIOS updates/hardware drivers/manufacturer updates for your system

Install SP1 (can be skipped if you installed Windows 2008 R2 + SP1 integrated)

Install all post-SP1 updates from Windows Update

*after each update install completes, reboot and run Windows Update until no further updates are offered*

Optional – Rename host to desired friendly name

Install Necessary Windows Server Roles and Features

Add the Role: Active Directory Domain Services

Run the Active Directory Domain Services installation wizard (dcpromo.exe)

Create a new domain in a new forest

Supply FQDN of the new forest root domain (e.g., privatecloud.local)

Supply Domain NetBIOS name (e.g., PRIVATECLOUD)

Select Forest Functional Level (Windows 2003 is fine)

Select Domain Functional Level (Windows 2003 is fine)

Allow DNS to be installed (Assign Static IP if necessary)

(***I assigned a static IP address/mask for my local subnet and pointed to my default gateway. I then configured DNS with forwarders of and – These are AT&T’s public DNS servers. This allows for Internet access to download Windows Updates or other software needed***)

Location for Database, Log Files, SYSVOL = Default

Assign Password for Directory Services Restore Mode

Complete Wizard and Reboot

Add the Role: Hyper-V

Create Virtual Network: Attach to local Ethernet

Complete Wizard and Reboot

Allow Wizard to Complete and Reboot

Install Web Server (IIS) Role

IIS is required by the Self Service Portal 2.0. The portal also requires specific Web Server (IIS) role services and the Message Queuing feature to be enabled.

Add the Role: Web Server (IIS) – Next

Role Services – Select:

Static Content

Default Document


ASP.NET

.NET Extensibility

ISAPI Extensions

ISAPI Filters

Request Filtering

Windows Authentication

IIS6 Metabase Compatibility

Confirmation – Install

Add the Feature: Message Queuing – Next

Confirmation – Install

Windows Server 2008 R2 Foundation is now complete!

The Windows Server 2008 R2 + Hyper-V host is now complete. There are a few (not really) optional steps below you may wish to take just for your own sanity.

Optional (recommended) – Install Windows Server Backup Features

Optional (recommended) – Perform Bare Metal Recovery Backup to external storage using Windows Backup (or the backup system of your choice)

Install SQL Server 2008 R2

SQL Server 2008 R2 is used for storing configuration information for System Center Virtual Machine Manager and the SCVMM Self-Service Portal. You do not need to be a SQL guru to get things up and running or even for day-to-day operations. You can pretty much forget about SQL except for routine patching. The exception to this (there are always exceptions) is if you use this document to implement a Private Cloud in a production environment using an existing production SQL Server. In that case, I beg you to speak to your SQL Admin *BEFORE* doing anything with SQL. You have been warned.

Launch SQL setup

New Installation or add features to an existing installation

Enter Product key or Specify a free edition

Accept License

Setup Support Files – Install

Setup Support Rules – Address any issues – Next

SQL Server Feature Installation – Next

Feature Selection – Select

Database Engine Services

Management Tools Basic

Default paths – Next

Installation Rules – Next

Default Instance (MSSQLSERVER) – Next

Disk Space Requirements – Next

Use the same account for all SQL server services

(if this host will be connecting to a network or the Internet, then I suggest following SQL security guidelines and creating unique accounts for each service. If you will only be using this for non-network-connected demonstrations, you can use the domainname\Administrator account for simplicity)

Supply credentials – Next

Windows authentication mode – Add current user – domainname\Administrator – Next

Error Reporting – Your choice – Next

Installation Configuration Rules – Next

Ready to Install – Summary – Install

Complete – Close

Windows Update – Check for Updates – Install – Reboot

(This one takes quite a while. Go get something to eat.)

Install System Center Virtual Machine Manager R2 + SP1

VMM Server Component

Start SCVMM Setup – Setup – VMM Server

Accept License – Next

CEIP – Your choice – Next

Product Registration – Fill in – Next

Prerequisite Check – Next

Installation Location – Default is fine – Next

SQL Server Settings – Use a supported version of SQL Server:

Server name: <name of localhost>

Check – Use the following credentials:

User name: <domain>\Administrator

Password: <password>

Select or enter a SQL instance: Drop down to MSSQLSERVER

Select or enter a database:  <enter a database name; ie; SCVMMDB>

Check – Create a new database

Library Share Settings

Create a new library share – Defaults are fine – Next

Installation Settings

Ports – Defaults are fine

VMM Server Account – Local System is fine – Next

Summary of Settings – Install

Install the VMM Administrator Console

Once the Virtual Machine Manager Administrator Console is installed, this will become the primary interface used when dealing with your virtualization infrastructure. There will be times you will want or need to go back to the standard Hyper-V MMC, but you should get comfortable with the SCVMM Administrator console for day-to-day operations.

Start SCVMM Setup – Setup – VMM Administrator Console

Accept License – Next

CEIP – Your choice – Next

Prerequisite Check – Next

Installation Location – Default is fine – Next

Port – 8100 – Default is fine

Summary of Settings – Install

Windows Update – Check for Updates – Install – Reboot

Take a deep breath. Switch from caffeine to ….something more calming. You are almost done.


Install the SCVMM Self-Service Portal 2.0 with SP1

***Note – You probably noticed an option to install a Self Service Portal from within the SCVMM Setup interface. DO NOT INSTALL THIS VERSION. It is an older version and does not provide the most current functionality. Download the SSP 2.0 + SP1 version from the link in the “Software Requirements” section of this document.***

The Self-Service Portal is one of the defining features of the Microsoft Private Cloud. Through this portal, administrators can create resource pools consisting of networks, storage, load balancers, virtual machine templates and domains. Administrators can then create and manage business units that can use the self-service portal to request these pools of resources and create them on demand.

Start SSP2.0 Setup

Getting Started – (License page) – Accept – Next


VMMSSP Server Component

VMMSSP Website Component


Prerequisite Page – Should be all green – Next

VMMSSP Files – Default is fine – Next

Database Server: <localhost name>

Click – Get Instances

SQL Server Instance: Default

Credentials: Connect using Windows Authentication

Create a new Database or…..: Create a new database


Provide an account for the server component

User Name: Administrator

Password: <password>

Domain: <domainname>

Test Account – Next

Configure WCF Endpoints – Defaults are fine – Next

Provide a list of Database Administrators



Configure the IIS web site for the VMMSSP website component

IIS website name:  VMMSSP <default>

Port Number:  81  <you cannot use 80 since it is assigned to the default web site>

Application pool name:  VMMSSPAppPool  <default>

User Name:  Administrator

Password :  <password>

Domain:  <domainname>


Installation Summary – Install – Yes to Dialog

Close the final window.

Windows Update – Check for Updates – Install – Reboot

Once logged in:

Delete any setup files or unnecessary files/data you will not use for demonstration purposes

Empty the Recycle Bin

NOT OPTIONAL – Perform Bare Metal Recovery Backup to external storage using Windows Backup (or the backup system of your choice). Trust me. At this point you have 6-10 hours invested in this setup and you do NOT want to have to start over.

You now have the hardware and software in place to demo a private cloud!

However, a Private Cloud is more about HOW you use the infrastructure to create value, provide self-service, reduce overhead, automate resource creation and, ultimately, reduce costs.

In the next document I produce, I will define a list of resources to create using the Hyper-V MMC, System Center Virtual Machine Manager, and the SCVMM Self-Service portal. I will then do a few recorded demos with these resources that you can customize for your own demonstration purposes.

Call To Action

Download a hard copy of this document for your own reference –

Bookmark my blog and watch for more posts and screen casts on Private Cloud. Here are some of the Planned Posts/Content/Screencasts I am working on:

Configuring Basic Resources for use in a Private Cloud

Creating virtual hard disks

Creating virtual machines

Creating templates in SCVMM

Creating Hardware and OS profiles in SCVMM

Configuring and using the Self-Service Portal 2.0

Initial Configuration

Creating and managing Infrastructures

Working with Virtual Machines

Managing User Roles and Business Units

Walking through the Request process

If there is a particular feature or process you would like to know more about, please contact me through a comment to this post or in email and we will discuss getting it produced.

For now, have fun playing with your new Private Cloud! (AFTER that bare metal recovery backup!)


Reference :

How to build a private cloud

If you’re nervous about running your business applications on a public cloud, many experts recommend that you take a spin around a private cloud first.

Cloud: Ready or not?

But building and managing a cloud within your data center is not just another infrastructure project, says Joe Tobolski, director of cloud computing at Accenture.

“A number of technology companies are portraying this as something you can go out and buy – sprinkle a little cloud-ulator powder on your data center and you have an internal cloud,” he says. “That couldn’t be further from the truth.”

An internal, on-premises private cloud is what leading IT organizations have been working toward for years. It begins with data center consolidation; rationalization of OS, hardware and software platforms; and virtualization up and down the stack (servers, storage and network), Tobolski says.

Elasticity and pay-as-you-go pricing are guiding principles, which imply standardization, automation and commoditization of IT, he adds.

And it goes way beyond infrastructure and provisioning resources, Tobolski adds. “It’s about the application build and the user’s experience with IT, too.”

Despite all the hype, we’re at a very early stage when it comes to internal clouds. According to Forrester Research, only 5% of large enterprises globally are even capable of running an internal cloud, with maybe half of those actually having one, says James Staten, principal analyst with the firm.

But if you’re interested in exploring private cloud computing, here’s what you need to know.

managing cloud computing

First steps: Standardization, automation, shared resources

Forrester’s three tenets for building an internal cloud are similar to Accenture’s precepts for next-generation IT.

To build an on-premises cloud, you must have standardized – and documented — procedures for operating, deploying and maintaining that cloud environment, Staten says.

Most enterprises are not nearly standardized enough, although companies moving down the IT Infrastructure Library (ITIL) path for IT service management are closer to this objective than others, he adds.

Standardized operating procedures that allow efficiency and consistency are critical for the next foundational layer, which is automation. “You have to be trusting of and a big-time user of automation technology,” Staten says. “That’s usually a big hurdle for most companies.”

Automating deployment is probably the best place to start because it enables self-service capabilities. And for a private cloud, this isn’t Amazon-style self-service in which any developer can deploy virtual machines (VMs) at will. “That’s chaos in a corporation and completely unrealistic,” Staten says.

Rather, for a private cloud, self-service means that an enterprise has established an automated workflow whereby resource requests go through an approvals process.

Once approved, the cloud platform automatically deploys the specified environment. More often, private cloud self-service is about developers asking for “three VMs of this size, a storage volume of this size and this much bandwidth,” Staten says. Self-service for end users seeking resources from the internal company cloud would be “I need a SharePoint volume or a file share.”
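The request-then-approve flow described above can be sketched as a tiny state machine. The class and field names here are hypothetical illustrations; real portals such as the SCVMM Self-Service Portal implement this with user roles, business units and quotas:

```python
class ResourceRequest:
    """A self-service resource request that must pass through an
    approval step before automated deployment runs."""

    def __init__(self, requester, vms, storage_gb):
        self.requester = requester
        self.vms = vms                # e.g. "three VMs of this size"
        self.storage_gb = storage_gb  # "a storage volume of this size"
        self.state = "submitted"

    def approve(self):
        self.state = "approved"

    def deploy(self):
        # Deployment is fully automated, but only runs after approval.
        if self.state != "approved":
            raise RuntimeError("request must be approved before deployment")
        self.state = "deployed"

req = ResourceRequest("dev-team", vms=3, storage_gb=500)
req.approve()
req.deploy()
print(req.state)  # deployed
```

The key contrast with public-cloud self-service is that the transition from "submitted" to "approved" is a human (or policy) gate, while everything after it is automation.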

Thirdly, building an internal cloud means sharing resources – “and that usually knocks the rest of the companies off the list,” he says.

This is not about technology. “It’s organizational — marketing doesn’t want to share servers with HR, and finance won’t share with anybody. When you’re of that mindset, it’s hard to operate a cloud. Clouds are highly inefficient when resources aren’t shared,” Staten says.

Faced with that challenge, IT Director Marcos Athanasoulis has come up with a creative way to get participants comfortable with the idea of sharing resources on the Linux-based cloud infrastructure he oversees at Harvard Medical School (HMS) in Boston. It’s a contributed hardware approach, he says.

At HMS, which Athanasoulis calls the land of 1,000 CIOs, IT faces a bit of a unique challenge. It doesn’t have the authority to tell a lab what technology to use. It has some constraints in place, but if a lab wants to deploy its own infrastructure, it can. So when HMS approached the cloud concept four years ago, it did so wanting “a model where we could have capacity available in a shared way that the school paid for and subsidized so that folks with small needs could come in and get what they needed to get their research done but also be attractive to those labs that would have wanted to build their own high-performance computing or cloud environments if we didn’t offer a suitable alternative.”

With this approach, if a lab bought 100 nodes in the cloud, it got guaranteed access to that capacity. But if that capacity was idle, others’ workloads could run on it, Athanasoulis says.

“We told them – you own this hardware but if you let us integrate into the cloud, we’ll manage it for you and keep it updated and patched. But if you don’t like how this cloud is working, you can take it away.” He adds, “That turned out to be a good selling point, and not once [in four years] has anybody left the cloud.”

To support the contributed hardware approach, HMS uses Platform Computing’s Platform LSF workload automation software, Athanasoulis says. “The tool gives us the ability to set up queues and suspend jobs that are on the contributed hardware nodes, so that the people who own the hardware get guaranteed access and that suspended jobs get restored.”

Don’t proceed until you understand your services

If clouds are inefficient when resources aren’t shared, they can be outright pointless if services aren’t considered before all else. IBM, for example, begins every potential cloud engagement with an assessment of the different types of workloads and the risk, benefit and cost of moving each to different cloud models, says Fausto Bernardini, director of IT strategy and architecture, cloud portfolio services, at IBM.

Whether a workload has affinity with a private, public or hybrid model depends on a number of attributes, including such key ones as compliance and security but others, too, such as latency and interdependencies of components in applications, he says.

Many enterprises think about building a private cloud from a product perspective before they consider services and service requirements – and that’s the exact opposite of where to start, says Tom Bittman, vice president and distinguished analyst at Gartner.

“If you’re really going to build a private cloud, you need to know what your services are, and what the [service-level agreements], costs and road maps are for each of those. This is really about understanding whether the services are going toward the cloud computing style or not,” he says.

Common services with relatively static interfaces, even if your business is highly reliant on them, are those you should be considering for cloud-style computing, private or public, Bittman says. E-mail is one example.

“I may use it a lot, but it’s not intertwined with the inner workings of my company. It’s the kind of service moving in the direction of interface and independence – I don’t want it to be integrated tightly with the company. I want to make it as separate as possible, easy to use, available from self-service interface,” Bittman says. “And if I’ve customized this type of service over time, I’ve got to undo that and make it as standard as possible.”

Conversely, services that define a business and are constantly the focus of innovative initiatives are not cloud contenders, Bittman says. “The goal for these services is intimacy and integration, and they are never going to the cloud. They may use cloud functions at a low level, like for raw compute, but the interface to the company isn’t going to be a cloud model.”

Only once you understand which services are right for the cloud and how long it might take you to get them to a public-readiness state will you be ready to build a business case and start to look at building a private cloud from a technology perspective, he says.

The final tiers: Service management and access management

Toward that end, Gartner has defined four tiers of components for building a private cloud.

At the bottom sits the resource tier comprising infrastructure, platforms or software. Raw virtualization comes to mind immediately, but VMs aren’t the only option – as long as you’ve got a mechanism for turning resources into a pool you’re on the way, Bittman says. Rapid re-provisioning technology is another option, for example.

Above the resource pool sits the resource management tier. “This is where I manage that pool in an automated manner,” says Bittman, noting that for VMware environments, this is about using VMware Distributed Resource Scheduler.

“These two levels are fairly mature,” Bittman says. “You can find products for these available in the market, although there’s not a lot of competition yet at the resource management tier.”

Next comes the service management tier. “This is where there’s more magic required,” he says. “I need something that lets me do service governance, something that lets me convert pools of resources into service levels. In the end, I need to be able to present to the user some kind of service-level interface that says ‘performance’ or ‘availability’ and have this services management tier for delivering on that.”

As you think about building your private cloud, understand that the gap between need and product availability is pretty big, Bittman says. “VMware, for example, does a really good job of allowing you to manage your virtualization pool, but it doesn’t know anything about services. VMware’s vCenter AppSpeed is one early attempt to get started on this,” he adds.

“What we really need is a good service governor, and that doesn’t exist yet,” says Bittman.

Sitting atop it all is the access management tier, which is all about the user self-service interface. “It presents a service catalog, and gives users all the knobs to turn and lets you manage subscribers,” Bittman says. “The interface has to be tied in some way to costing and chargeback, or at least metering – it ties to the service management tier at that level.”

Chargeback is a particularly thorny challenge for private cloud builders, but one that they can’t ignore for long. “It’s tricky from a technology perspective — what do I charge based on? But also from political and cultural perspectives,” Bittman says. “But frankly, if I’m going to move to cloud computing I’m going to move to a chargeback model so that’s going to be one of the barriers that needs to be broken anyways.”
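The metering side of chargeback can start very simply: record per-department resource usage and multiply by a rate card. The Python sketch below is purely illustrative; the rates, metric names and figures are invented for the example, not drawn from any real product.

```python
# Illustrative only: a toy chargeback calculation from metered usage.
# Rates and metric names are made up for this example.
RATES = {
    "vm_hours": 0.05,      # dollars per VM-hour
    "storage_gb": 0.10,    # dollars per GB-month
    "bandwidth_gb": 0.08,  # dollars per GB transferred
}

def chargeback(usage):
    """Turn a department's metered usage into a monthly charge."""
    return sum(RATES[metric] * amount for metric, amount in usage.items())

marketing = {"vm_hours": 720, "storage_gb": 500, "bandwidth_gb": 250}
print(f"Marketing owes ${chargeback(marketing):.2f}")
```

Even a toy model like this makes Bittman’s point concrete: once usage is metered per consumer, the step from “showback” reporting to actual chargeback is a policy decision, not a technical one.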

In the end, it’s about the business

And while cloud-builders need to think in terms of elasticity, automation, self-service and chargeback, they shouldn’t be too rigid about the distinctions at this stage of cloud’s evolution, Bittman says. “We will see a lot of organizations doing pure cloud and a lot doing pure non-cloud, and a whole lot of stuff somewhere in the middle. What it all really comes down to is, ‘Is there benefit?'”

Wentworth-Douglass Hospital, in Dover, N.H., for example, is building what it calls a private cloud using a vBlock system from Cisco, EMC and VMware. But it’s doing so more with an eye toward abstraction of servers and not so much on the idea of self-provisioning or software-as-a-service (SaaS), says Scott Heffner, network operations manager for the hospital.

“Maybe we’ll get to SaaS eventually, and we are doing as much automation as we can, but I’m introducing concepts slowly to the organization because the cloud model is so advanced that to get the whole organization to conceive of and understand it right off the bat is too much,” he says.


In general, CUBRID is a comprehensive open source relational database management system highly optimized for web applications, especially when complex web services process large amounts of data and generate huge numbers of concurrent requests.

More specifically, CUBRID is implemented in the C programming language. It is a scalable, high-performance database system that is almost fully compatible with MySQL. CUBRID has a unique architecture and rich functionality. Its High-Availability feature, sync/async/semi-sync replication, online and incremental backup, and many other enterprise-level features make CUBRID a reliable solution ideal for Web services. By providing unique optimized features, CUBRID can process many more parallel requests with much lower response times.


CUBRID has been developed since 2006, and today it is becoming very popular because of its clean software architecture, highly optimized for web applications, as well as its rich database functionality. Its code base has undergone complete optimization and intensive quality assurance. CUBRID is used by many small and medium-sized companies as well as large organizations, the latter running farms of over 10,000 data servers. (See Who else uses CUBRID?)

CUBRID, unlike other database systems, does not have an Enterprise version of its DBMS. It does not split its license policy between Community and Enterprise editions. There is only one version of CUBRID DBMS, which follows the GNU General Public License version 2 or higher. This CUBRID Open Source License Policy is extremely beneficial for companies developing client applications: they do not have to purchase an Enterprise license or share their income. This gives organizations a significant cost-savings opportunity over alternative database management solutions. (See the complete article on CUBRID Open Source License Policy.)

Total Cost of Ownership (TCO) for a CUBRID-based database solution is significantly lower than the alternatives due to hardware cost savings. CUBRID’s high performance, optimizations and scaling mean that organizations can deploy cheaper hardware and still provide 24/7 uptime for the same number of concurrent users.


Data Capture, Data Segment, and Data Mining

Data Capture

Data input can happen in several ways. One way is as the result of data entry. In data entry, data is placed in chosen fields of a database by a human agent using a device such as a mouse, keypad, keyboard, touch screen, or stylus, or alternatively, with speech recognition software. Data capture is a kind of data input in which there is no data entry. Instead, data is collected in conjunction with a separate activity.

Devices involved in data capture include supermarket checkouts equipped with barcode readers. Barcode readers are electronic devices that use a laser beam to scan a barcode. They are categorized as non-contact automatic data capture devices, though they need to be within a few inches of the material they are scanning to read it.
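One reason barcode capture is so reliable is that the code carries its own error check: the last digit of a retail barcode is a checksum over the others, so a misread is usually rejected at the scanner. As an illustration, here is a minimal Python sketch of the standard EAN-13 check-digit calculation:

```python
def ean13_check_digit(first12):
    """Compute the EAN-13 check digit for the first 12 digits of a barcode.
    Digits in odd positions (1st, 3rd, ...) are weighted 1, even positions 3."""
    digits = [int(c) for c in first12]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return (10 - total % 10) % 10

def is_valid_ean13(code):
    """A scanned 13-digit code is accepted only if its last digit matches."""
    return (len(code) == 13 and code.isdigit()
            and int(code[-1]) == ean13_check_digit(code[:12]))

print(is_valid_ean13("4006381333931"))  # a valid EAN-13 -> True
```

A scanner that computes this sum and gets a mismatch simply refuses the read, which is why cashiers re-swipe rather than correct bad data by hand.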

Magnetic stripe readers, also called card swipe machines, collect the information stored in the magnetic material that is found on bank, charge, and credit cards. This information often includes an account number, the customer’s own identification number, and other information. ATMs can also read this information. If the magnetic stripe is damaged or exposed to a strong magnetic or electrical field, the information will not be retrievable.
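Account numbers of the kind stored on a magnetic stripe usually embed a Luhn check digit, which lets a reader or terminal catch most misreads and transposition errors before contacting the bank. A minimal Python sketch of the standard Luhn test:

```python
def luhn_valid(number):
    """Return True if the digit string passes the Luhn checksum
    used on most bank, charge and credit card account numbers."""
    digits = [int(c) for c in number]
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("79927398713"))  # the classic Luhn test number -> True
```

The check only guards against accidental corruption, not fraud, which is why the POS terminal still has to validate the account with the bank.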

A point-of-sale (POS) terminal, through which credit card transactions are submitted and validated, reads the bank name and customer account number of a card swiped through a magnetic stripe reader. If the bank responds that the funds are available, the POS terminal transfers the approved amount to the account of the seller, finishing the transaction with a printed receipt.

Optical character recognition (OCR) involves converting a digitized image of printed or handwritten text into characters that word-processing programs can recognize. It is also used to preserve documents in an electronic format without having to re-enter the data by hand.

Radio frequency identification (RFID) is a data capture technology in which items are identified through transponders attached to them. A transponder is a passive type of radio-relay equipment: when struck by an initiating signal, it responds with a repetition of the original signal or a coded recognition signal. RFIDs work from greater distances than barcode readers can, which is one of their main advantages.

Data Segmentation

Data segmentation is the process of dividing your data into distinct groups, or segments, so that you can use it more efficiently within marketing and operations.

Data segmentation allows you to communicate a relevant and targeted message to each segment identified. By segmenting your data, you can identify different levels of your customer database and tailor your messaging to suit each target market.

Trying to understand the different characteristics of customers and prospects is a common challenge experienced by organisations today. Without this valuable information it is difficult to produce targeted and cost-effective communications. Marketing spend can be wasted on communicating with unprofitable customers and unlikely prospects. Resource can be ploughed into hitting the wrong target markets. Data segmentation can be used to overcome such issues.
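In its simplest form, segmentation is just rule-based bucketing on customer attributes such as recency and spend. The sketch below is illustrative only; the field names and thresholds are made up:

```python
# Illustrative rule-based segmentation; fields and thresholds are invented.
def segment(customer):
    """Assign a customer record to a marketing segment."""
    if customer["months_since_purchase"] > 12:
        return "lapsed"
    if customer["annual_spend"] >= 1000:
        return "high value"
    if customer["months_since_purchase"] <= 3:
        return "active"
    return "occasional"

customers = [
    {"name": "A", "months_since_purchase": 1,  "annual_spend": 1500},
    {"name": "B", "months_since_purchase": 14, "annual_spend": 200},
    {"name": "C", "months_since_purchase": 2,  "annual_spend": 300},
]
for c in customers:
    print(c["name"], "->", segment(c))
```

Each segment can then receive its own message: a win-back offer for “lapsed” customers, a loyalty reward for “high value” ones, and so on.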

Without data segmentation, an organisation can face various problems such as:

  • General lack of knowledge about their customer and prospect base
  • Untargeted communications producing low ROI
  • Poor customer retention
  • Customer dissatisfaction caused by incorrect messaging

Data Mining

Data mining uses a relatively large amount of computing power operating on a large set of data to determine regularities and connections between data points. Algorithms that employ techniques from statistics, machine learning and pattern recognition are used to search large databases automatically. Data mining is also known as Knowledge-Discovery in Databases (KDD).

Like the term artificial intelligence, data mining is an umbrella term that can be applied to a number of varying activities. In the corporate world, data mining is used most frequently to determine the direction of trends and predict the future. It is employed to build models and decision support systems that give people information they can use. Data mining also takes a front-line role in the battle against terrorism; it was reportedly used to identify the leader of the 9/11 attacks.

Data miners are statisticians who use techniques with names like near-neighbor models, k-means clustering, the holdout method, k-fold cross validation, the leave-one-out method, and so on. Regression techniques are used to filter out irrelevant patterns, leaving only useful information. The term Bayesian is seen frequently in the field, referring to a class of inference techniques that predict the likelihood of future events by combining prior probabilities and probabilities based on conditional events. Spam filtering is arguably a form of data mining, which automatically brings relevant messages to the surface from a chaotic sea of phishing attempts and Viagra pitches.
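To make one of those techniques concrete, here is a tiny self-contained sketch of k-means clustering in one dimension. Real data mining tools work in many dimensions with far more robust initialization; this is only a sketch of the core loop, which alternates between assigning points to their nearest centroid and recomputing each centroid as its cluster’s mean:

```python
def kmeans_1d(points, k, iters=20):
    """Naive 1-D k-means: assign points to the nearest centroid,
    then move each centroid to the mean of its cluster, and repeat."""
    # Seed centroids by taking evenly spaced values from the sorted data.
    centroids = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(kmeans_1d([1, 2, 3, 10, 11, 12], k=2))  # -> [2.0, 11.0]
```

Given two obvious groups of points, the algorithm settles on a centroid in the middle of each; on real data the resulting clusters become customer segments, fraud groupings, and similar structures.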

Decision trees are used to filter mountains of data. In a decision tree, all data passes through an entrance node, where it faces a filter that separates the data into streams depending on its characteristics. For example, data about consumer behavior is likely to be filtered based on demographic factors. Data mining is not primarily about fancy graphs and visualization techniques, but it does employ them to show what it has found. It is known that we can absorb more statistical information visually than verbally and this format for presentation can be very persuasive and powerful if used in the right context.
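The entrance-node-and-filter description above maps directly onto nested conditionals. The toy tree below is hand-built for illustration, with made-up demographic fields and thresholds; in practice such trees are learned automatically from the data:

```python
# A toy hand-built decision tree over invented demographic fields.
def classify(record):
    """Route a consumer record through a small decision tree."""
    if record["age"] < 30:                    # entrance node: split on age
        return "frequent buyer" if record["urban"] else "occasional buyer"
    if record["income"] >= 60000:             # second split for older customers
        return "premium prospect"
    return "value shopper"

print(classify({"age": 25, "urban": True,  "income": 40000}))  # frequent buyer
print(classify({"age": 45, "urban": False, "income": 80000}))  # premium prospect
```

Each branch is one of the “streams” the article describes: every record enters at the top and is filtered downward until it lands in a leaf category.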

As our civilization becomes increasingly data-saturated and sensors are distributed en masse into our local environments, we will inadvertently discover things that might be missed on the first pass over. Data mining will let us correct these mistakes and discover new insights based on past data, giving us more bang for our data storage buck.