Top 5 Tools for Network Security Monitoring

Security data can be found on virtually every system in a corporate network. However, not all systems provide equally valuable security context. While monitoring everything would be ideal, it is impractical for most organizations due to resource constraints. So which data sources should you prioritize to make the most of your monitoring efforts?

When it comes to security monitoring, context is key. The more relevant security context you have, the more likely you are to detect real security incidents while weeding out false positives (i.e., alerts on non-threats). In deciding which devices and systems to monitor for security data, the first priority is to give yourself as much useful context as possible.

Based on a decade of monitoring experience, SecureWorks believes the top five sources of security context are:

Number One: Network-based Intrusion Detection and Prevention Systems (NIDS/NIPS)

NIDS and NIPS devices use signatures to detect security events on your network. Performing full packet inspection of network traffic at the perimeter or across key network segments, most NIDS/NIPS devices provide detailed alerts that help to detect:

  • Known vulnerability exploit attempts
  • Known Trojan activity
  • Anomalous behavior (depending on the IDS/IPS)
  • Port and host scans

Number Two: Firewalls

Serving as the network’s gatekeeper, firewalls permit or deny incoming and outgoing network connections based on your policies and log those decisions. Some firewalls also have basic NIDS/NIPS signatures to detect security events. Monitoring firewall logs and alerts helps to detect (a log-parsing sketch follows the list):

  • New and unknown threats, such as custom Trojan activity
  • Port and host scans
  • Worm outbreaks
  • Minor anomalous behavior
  • Almost any activity denied by firewall policy
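
Here is that log-parsing sketch – a minimal Python example of mining firewall deny logs for port scans. The log format and the threshold are hypothetical, so adapt the regex to whatever your firewall actually emits:

    import re
    from collections import defaultdict

    # Hypothetical one-line-per-event deny log, e.g.:
    #   2008-07-01T12:00:01 DENY TCP 203.0.113.7:4431 -> 10.0.0.5:22
    LINE = re.compile(
        r"(?P<ts>\S+) DENY (?P<proto>\S+) "
        r"(?P<src>[\d.]+):\d+ -> (?P<dst>[\d.]+):(?P<dport>\d+)"
    )

    SCAN_THRESHOLD = 20  # distinct destination ports before we call it a scan

    def detect_port_scans(log_lines):
        """Flag sources whose denied connections touch many distinct ports."""
        ports_by_src = defaultdict(set)
        for line in log_lines:
            m = LINE.match(line)
            if m:
                ports_by_src[m.group("src")].add(m.group("dport"))
        return {src: len(ports) for src, ports in ports_by_src.items()
                if len(ports) >= SCAN_THRESHOLD}

    sample = [
        "2008-07-01T12:00:%02d DENY TCP 203.0.113.7:4431 -> 10.0.0.5:%d" % (i, i)
        for i in range(25)
    ]
    print(detect_port_scans(sample))  # {'203.0.113.7': 25}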

Number Three: Host-based Intrusion Detection and Prevention Systems (HIDS/HIPS)

Like NIDS/NIPS, host-based intrusion detection and prevention systems utilize signatures to detect security events. But instead of inspecting network traffic, HIDS/HIPS agents are installed on servers to directly alert on security activity. Monitoring HIDS/HIPS alerts helps to detect:

  • Known vulnerability exploit attempts
  • Console exploit attempts
  • Exploit attempts performed over encrypted channels
  • Password grinding (manual or automated attempts to guess passwords)
  • Anomalous behavior by users or applications

Number Four: Network Devices with Access Control Lists (ACLs)

Network devices that can use ACLs, such as routers and VPN servers, have the ability to control network traffic based on permitted networks and hosts. Monitoring logs from devices with ACLs helps to detect:

  • New and unknown threats, such as custom Trojan activity
  • Port and host scans
  • Minor anomalous behavior
  • Almost anything denied by the ACLs

Number Five: Server and Application Logs

Many types of servers and applications log events such as login attempts and user activity. Depending on the extent of their logging capabilities, monitoring server and application logs can help to detect (a brute-force detection sketch follows the list):

  • Known and unknown exploit attempts
  • Password grinding
  • Anomalous behavior by users or applications
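
Here is that sketch – a minimal Python example of spotting password grinding in authentication events. The event format and the threshold are hypothetical; real logs would need parsing first:

    from collections import Counter

    # Hypothetical parsed auth events: (source_ip, username, success)
    events = [
        ("198.51.100.9", "admin", False),
        ("198.51.100.9", "root", False),
        ("198.51.100.9", "admin", False),
        ("10.0.0.12", "alice", True),
    ] * 5

    FAIL_THRESHOLD = 10  # failed logins from one source before alerting

    failures = Counter(src for src, _, ok in events if not ok)
    for src, count in failures.items():
        if count >= FAIL_THRESHOLD:
            print("possible password grinding from %s: %d failures" % (src, count))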

It is important to understand that the incremental value of a data source will vary from situation to situation. A source’s purpose, its location in your network and the quality of the data it provides are a few of the many variables that must be considered when planning your security monitoring strategy.

Keep in mind that there are many other security technologies, network devices and log sources throughout your IT environment that may also provide beneficial context for your security monitoring efforts. For example, Unified Threat Management (UTM) devices, which combine firewall, NIDS/NIPS and other capabilities in a single device, can be monitored to detect events similar to those from standalone firewalls and NIDS/NIPS devices.

By monitoring the assets that provide the highest-value security context, you can optimize your security monitoring efforts. Doing so will provide faster, more accurate detection of threats while making the most of your security resources. For additional information on monitoring security events and other security topics, please visit the SecureWorks website.

 

Featured Gartner Research:

What Organizations are Spending on IT Security

According to research and advisory firm Gartner Inc., “Many CIOs and chief information security officers (CISOs) are uncertain about what is a ‘normal’ level of security spending in terms of a percentage of the overall IT budget – especially during economic uncertainty.” This research note will help IT managers understand how organizations are investing in their information security and compare their spending with that of their peers.

View the complimentary Gartner report made available to you by SecureWorks.

 

Security 101: Web Application Firewalls

What is a Web Application Firewall?
A web application firewall (WAF) is a tool designed to protect externally-facing web applications used for online banking, Internet retail sales, discussion boards and many other functions from application layer attacks such as cross-site scripting (XSS), cross-site request forgery (XSRF) and SQL injection. Because web application attacks exploit flaws in application logic that is often developed internally, each attack is unique to its target application. This makes it difficult to detect and prevent application layer attacks using existing defenses such as network firewalls and NIDS/NIPS.

How do WAFs Work?
WAFs utilize a set of rules or policies to control communications to and from a web application. These rules are designed to block common application layer attacks. Architecturally, a WAF is deployed in front of an application to intercept communications and enforce policies before they reach the application.
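
As a rough illustration of that rule-based inspection, here is a toy Python sketch. Real WAF rules are far more nuanced; these naive patterns are for illustration only and would both miss attacks and misfire in production:

    import re

    # Toy rule set: each rule is a name plus a pattern applied to request input.
    RULES = [
        ("sql-injection", re.compile(r"('|%27)\s*(or|and)\s+\d+=\d+", re.I)),
        ("xss", re.compile(r"<\s*script", re.I)),
    ]

    def inspect(param_value):
        """Return the names of rules the input violates; empty list = pass."""
        return [name for name, pat in RULES if pat.search(param_value)]

    print(inspect("id=1"))                         # []
    print(inspect("name=' OR 1=1 --"))             # ['sql-injection']
    print(inspect("q=<script>alert(1)</script>"))  # ['xss']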

What are the Risks of Deploying a WAF?

Depending on the importance of the web application to your business, the risk of experiencing false positives that interrupt legitimate communications can be a concern. To provide sound protection with minimal false positives, WAF rules and policies must be tailored to the application(s) the WAF is defending. In many cases, this requires significant up-front customization based on in-depth knowledge of the application in question. This effort must also be maintained to address modifications to the application over time.

What are the Benefits of Deploying a WAF?

A WAF can be beneficial in terms of both security and compliance. Applications are a prime target for today’s hackers. Also, the Payment Card Industry (PCI) Data Security Standard requires companies who process, store or transmit payment card data to protect their externally-facing web applications from known attacks (Requirement 6.6). If managed properly and used in conjunction with regular application code reviews, vulnerability testing and remediation, WAFs can be a solid option for protecting against web application attacks and satisfying related compliance requirements.

 

Reference: http://www.secureworks.com/resources/newsletter/2008-07/

NIDS (Network Intrusion Detection System) and NIPS (Network Intrusion Prevention System)

NIDS and NIPS (behavior-based, signature-based, anomaly-based, heuristic)

An intrusion detection system (IDS) is software that runs on a server or network device to monitor and track network activity. By using an IDS, a network administrator can configure the system to monitor network activity for suspicious behavior that can indicate unauthorized access attempts. IDSs can be configured to evaluate system logs, look at suspicious network activity, and disconnect sessions that appear to violate security settings.

IDSs are often bundled with firewalls. Firewalls by themselves will prevent many common attacks, but they don’t usually have the intelligence or the reporting capabilities to monitor the entire network. An IDS used in conjunction with a firewall gives you both a preventive posture (the firewall) and a reactive, detective posture (the IDS).

In response to an event, an IDS can react by disabling systems, shutting down ports, ending sessions, using deception (redirecting the attacker to a honeypot), and even potentially shutting down your network. A network-based IDS that takes active steps to halt or prevent an intrusion is called a network intrusion prevention system (NIPS). Systems operating in this mode are considered active systems.

Passive detection systems log the event and rely on notifications to alert administrators to an intrusion. Shunning, or ignoring, an attack is an example of a passive response, appropriate when an invalid attack can be safely ignored. A disadvantage of passive systems is the lag between intrusion detection and any remediation steps taken by the administrator.

An intrusion prevention system (IPS) follows the same process of gathering and identifying data and behavior as an IDS, with the added ability to block (prevent) the activity.

A network-based IDS examines network patterns, such as an unusual number of requests destined for a particular server or service, such as an FTP server. Network IDS sensors should be located as far forward as possible (e.g., on the firewall, a network tap, a SPAN port, or a hub) to monitor external traffic. Host IDS systems, on the other hand, are placed on individual hosts, where they can more efficiently monitor internally generated events.

Using both network and host IDS enhances the security of the environment.

Snort is an example of a network intrusion detection and prevention system. It conducts traffic analysis and packet logging on IP networks. Snort uses a flexible rule-based language to describe traffic that it should collect or pass, and a modular detection engine.

Network based intrusion detection attempts to identify unauthorized, illicit, and anomalous behavior based solely on network traffic. Using the captured data, the Network IDS processes and flags any suspicious traffic. Unlike an intrusion prevention system, an intrusion detection system does not actively block network traffic. The role of a network IDS is passive, only gathering, identifying, logging and alerting.

Host based intrusion detection system (HIDS) attempts to identify unauthorized, illicit, and anomalous behavior on a specific device. HIDS generally involves an agent installed on each system, monitoring and alerting on local OS and application activity. The installed agent uses a combination of signatures, rules, and heuristics to identify unauthorized activity. The role of a host IDS is passive, only gathering, identifying, logging, and alerting. Tripwire is an example of a HIDS.
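
In the spirit of Tripwire’s file-integrity checking, here is a minimal Python sketch: hash critical files once to build a baseline, then re-hash on a schedule and report differences. The monitored file list is illustrative:

    import hashlib
    import os

    def snapshot(paths):
        """Record a SHA-256 baseline for each monitored file."""
        baseline = {}
        for path in paths:
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
        return baseline

    def compare(baseline):
        """Report files that changed or disappeared since the baseline."""
        for path, digest in baseline.items():
            if not os.path.exists(path):
                print("MISSING: " + path)
            else:
                with open(path, "rb") as f:
                    if hashlib.sha256(f.read()).hexdigest() != digest:
                        print("MODIFIED: " + path)

    base = snapshot(["/etc/passwd", "/etc/hosts"])
    compare(base)  # prints nothing until a monitored file changes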

There are no fully mature open standards for intrusion detection at present. The Internet Engineering Task Force (IETF), the body that develops new Internet standards, has a working group developing a common format for IDS alerts.

The following types of monitoring methodologies can be used to detect intrusions and malicious behavior: signature, anomaly, heuristic and rule-based monitoring.

A signature based IDS will monitor packets on the network and compare them against a database of signatures or attributes from known malicious threats. This is similar to the way most antivirus software detects malware. The issue is that there will be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to your IDS.

A network IDS signature is a pattern that we want to look for in traffic. Signatures range from very simple – checking the value of a header field – to highly complex signatures that may actually track the state of a connection or perform extensive protocol analysis.
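
The following toy Python sketch shows both ends of that range in miniature: a simple header check and a payload pattern. The packet fields and signatures are hypothetical, not drawn from any real IDS:

    # Toy signatures: (name, match function over a parsed packet dict).
    SIGNATURES = [
        # Simple header check: traffic to a port that should never be open.
        ("telnet-to-server", lambda p: p["dport"] == 23),
        # Payload pattern: a byte string known to appear in some exploit.
        ("fake-exploit-string", lambda p: b"/bin/sh" in p["payload"]),
    ]

    def match(packet):
        """Return the names of all signatures the packet triggers."""
        return [name for name, test in SIGNATURES if test(packet)]

    pkt = {"src": "203.0.113.7", "dport": 23, "payload": b"GET /bin/sh HTTP/1.0"}
    print(match(pkt))  # ['telnet-to-server', 'fake-exploit-string']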

An anomaly-based IDS examines ongoing traffic, activity, transactions, or behavior for anomalies (things outside the norm) on networks or systems that may indicate attack. An IDS which is anomaly based will monitor network traffic and compare it against an established baseline. The baseline will identify what is “normal” for that network, what sort of bandwidth is generally used, what protocols are used, what ports and devices generally connect to each other, and alert the administrator when traffic is detected which is anomalous to the baseline.
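
A crude stand-in for such a baseline, sketched in Python (real anomaly engines model many dimensions of the traffic, not just volume; the training numbers here are invented):

    import statistics

    # Hypothetical per-hour byte counts observed during a training window.
    baseline = [1200, 1350, 1100, 1280, 1330, 1250, 1190, 1310]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(observed, sigmas=3):
        """Flag traffic more than `sigmas` standard deviations from the mean."""
        return abs(observed - mean) > sigmas * stdev

    print(is_anomalous(1275))   # False: within the normal band
    print(is_anomalous(98000))  # True: worth an alert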

Heuristic-based security monitoring uses an initial database of known attack types but dynamically alters signatures based on the learned behavior of network traffic. A heuristic system uses algorithms to analyze the traffic passing through the network. Heuristic systems require more fine-tuning to prevent false positives on your network.

A behavior-based system looks for variations in behavior such as unusually high traffic, policy violations, and so on. By looking for deviations in behavior, it is able to recognize potential threats and respond quickly.

Similar to firewall access control rules, a rule-based security monitoring system relies on the administrator to create rules and determine the actions to take when those rules are transgressed.

References:
• http://netsecurity.about.com/cs/hackertools/a/aa030504.htm
• http://www.sans.org/security-resources/idfaq/
• CompTIA Security+ Study Guide: Exam SY0-301, Fifth Edition by Emmett Dulaney
• Mike Meyers’ CompTIA Security+ Certification Passport, Second Edition by T. J. Samuelle

http://neokobo.blogspot.com/2012/01/118-nids-and-nips.html

AS/400 (IBM iSeries, AS/400e, eServer iSeries/400)

The AS/400 – formally renamed the “IBM iSeries,” but still commonly known as AS/400 – is a midrange server designed for small businesses and departments in large enterprises, and now redesigned so that it works well in distributed networks with Web applications. The AS/400 uses the PowerPC microprocessor with its reduced instruction set computer (RISC) technology. Its operating system is called OS/400. With multi-terabytes of disk storage and a Java virtual machine closely tied into the operating system, IBM hopes to make the AS/400 a kind of versatile all-purpose server that can replace PC servers and Web servers in the world’s businesses, competing with both Wintel and UNIX servers, while giving its present enormous customer base an immediate leap into the Internet.


The AS/400, one of IBM’s greatest success stories, is widely installed in large enterprises at the department level, in small corporations, in government agencies, and in almost every industry segment. It succeeded another highly popular product, the System/36, and was itself based on a later, more sophisticated product, the System/38. AS/400 customers can choose from thousands of applications that have already been written, and many have been “Web-enabled.” IBM points to the AS/400’s “uptime” of 99.9%.

The AS/400 comes with a database built in. One widely installed option is Domino (Notes with a Web browser).

According to IBM, these are some important new uses for the AS/400:

  • Data warehousing: With multi-gigabytes of RAM and multi-terabytes of hard disk space, the AS/400 can be a repository for large amounts of company data to which data mining could be applied.
  • Java application development: With its closely integrated Java virtual machine and new tools designed by IBM for building commercial applications with Java, the AS/400 can be used as a development system.
  • Web and e-commerce serving: Equipped with a Web server and applications designed to support e-commerce (taking orders, tracking orders, providing service to customers, working with partners and suppliers) and with firewall capabilities, the AS/400 can handle Internet serving for a moderate-size company.
  • Corporate groupware services: Assuming that Domino and Notes have been included with the system, it’s designed to quickly provide a corporation with sophisticated e-mail, project file sharing, whiteboards, and electronic collaboration.

Reference: http://search400.techtarget.com/definition/AS-400

RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams

RAID stands for Redundant Array of Inexpensive (Independent) Disks.

In most situations you will be using one of the following four RAID levels:

  • RAID 0
  • RAID 1
  • RAID 5
  • RAID 10 (also known as RAID 1+0)

This article explains the main differences between these RAID levels, along with easy-to-understand diagrams.

In all the diagrams mentioned below:

  • A, B, C, D, E and F – represent blocks
  • p1, p2, and p3 – represent parity

RAID LEVEL 0


Following are the key points to remember for RAID level 0.

  • Minimum 2 disks.
  • Excellent performance (as blocks are striped).
  • No redundancy (no mirror, no parity).
  • Don’t use this for any critical system.

RAID LEVEL 1

Following are the key points to remember for RAID level 1.

  • Minimum 2 disks.
  • Good performance (no striping, no parity).
  • Excellent redundancy (as blocks are mirrored).

RAID LEVEL 5


Following are the key points to remember for RAID level 5.

  • Minimum 3 disks.
  • Good performance (as blocks are striped).
  • Good redundancy (distributed parity).
  • The most cost-effective option providing both performance and redundancy. Use it for databases that are heavily read-oriented; write operations will be slow. (A parity sketch follows this list.)
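
The “distributed parity” above is just an XOR across the data blocks in each stripe, which is why RAID 5 survives the loss of any single disk. A minimal Python sketch of the arithmetic (the block contents are arbitrary placeholders):

    def xor_blocks(*blocks):
        """XOR equal-sized byte blocks together, byte by byte."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    # One stripe across a 3-disk RAID 5: two data blocks plus their parity.
    A = b"hello world!!..."
    B = b"data block two.."
    parity = xor_blocks(A, B)

    # If the disk holding B fails, XORing the survivors reconstructs it.
    recovered = xor_blocks(A, parity)
    assert recovered == B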

RAID LEVEL 10

Following are the key points to remember for RAID level 10.

  • Minimum 4 disks.
  • Also called a “stripe of mirrors.”
  • Excellent redundancy (as blocks are mirrored).
  • Excellent performance (as blocks are striped).
  • If you can afford it, this is the best option for mission-critical applications (especially databases).

Reference: http://www.thegeekstuff.com/2010/08/raid-levels-tutorial/

How to build a FREE Private Cloud using Microsoft Technologies….

This document will guide you through the process of setting up the bare minimum components to demo a Private Cloud environment using current release versions of Microsoft products and technologies. It is NOT meant for nor is it an ideal configuration for use in a production environment. If you have a Technet or MSDN subscription then you have all the software you need already. Otherwise you can download FREE TRIAL versions of all the necessary components from the Microsoft Technet Evaluation Center.

Once the installation and configuration are complete, you will be able to demo the use of System Center Virtual Machine Manager and the SCVMM Self Service Portal 2.0 to build and manage a Private Cloud. With additional software and hardware resources, this configuration can be expanded to include additional System Center Technologies to demonstrate a much broader Private Cloud implementation including monitoring, reporting, change management, deployment and more. There are free trial versions of all the System Center products at the Microsoft Technet Evaluation Center.

There is an assumption that you have at least a basic knowledge of the roles and services in Windows 2008 R2, a cursory knowledge of how to install SQL Server 2008 R2, and a basic understanding of how the System Center Virtual Machine Manager works. Additional documents and walkthroughs may be produced for more detail. If there is something you would like to have more information on, please comment to this blog post and let me know.

If you plan on doing this in a single sitting, bring plenty of your favorite caffeinated beverage, some good music to listen to, maybe even a good book, and a lot of patience. There is a lot of “hurry up and wait” during this setup. Expect to spend 6-10 hours depending on how fast your hardware is and how efficient you are. This guide could be condensed further by combining certain steps to reduce setup time slightly, but I have opted to make it as foolproof as possible. If you follow this guide exactly, you should not see any errors or failures during the installation.

The resultant demo configuration does not provide for any failover or redundancy and is intended solely as a lightweight demo/test/learning environment. The concepts here can be used as a template to install a production Private Cloud, but please, do not implement this configuration in production without speaking to the appropriate persons that administer your network. If you implement this in production, you do so at your own risk and you should have an updated resume available.

Architecture:
Host Machine – Windows Server 2008 R2 + SP1 + all post SP1 Updates

Roles: Active Directory Domain Services, DNS Server, Hyper-V, Web Server (IIS)

Software: SQL Server 2008 R2 x64, System Center Virtual Machine Manager 2008 R2 Server Components and Administrator Console, SCVMM Self Service Portal 2.0

Guest VMs – Once this install is complete, you can create whatever guest VMs you like for testing and demoing. In a future document I will detail a list of resources you may wish to create so you have a relevant test and demo environment.

Hardware Requirements:

I personally recommend using a desktop computer because of the drive options available. However, a high-end laptop can be used. I have performed this install on both hardware platforms in the following configurations:

Laptop: Lenovo W510 (quad processor + hyper-threading), 16gigs RAM, (1) 7200rpm SATA drive for host operating system, (1) 140gig Solid State Drive for guest VM storage

***This is the platform I used when creating this document***

Pros: Compact, very portable

Cons: Disk I/O and potential CPU bottlenecks decrease performance. This can be mitigated by investing in a higher-end disk drive and/or a laptop with greater processing capability, but that increases the cost dramatically. Overall a more expensive solution, even with lower-end components.

Desktop: Quad-processor CPU, 16gigs RAM, (1) 7200rpm drive for the host operating system, (2 or more) 7200rpm+ SATA drives for guest VM storage (these drives can be striped as RAID-0 for additional performance *or* formatted independently with guest VMs placed on separate spindles. For my desktop implementation at home I am using the RAID-0 option)

Pros: Better performance due to disk drive configuration options. Lower cost of desktop PC components make this a less expensive solution even with higher end hardware.

Cons: More of a fixed solution, less portable. You could potentially use an ultra-mini case or small “media center” type case to increase portability; however, desktop components are not designed to be moved around a lot, so you are at a higher risk of component failure.

I also *highly recommend* a high-capacity dedicated external storage device for backing up configurations along the way. The entirety of this private cloud configuration is relatively simple, but the overall process is time consuming. The more frequently you back up/snapshot at key stages, the less time you will spend rebuilding from scratch.

Software Requirements:

If you have a Technet or MSDN subscription you have everything you need. If you do not have a Technet or MSDN subscription you can use free trial software for everything. Just be mindful of the individual timebombs and make note of when things expire. Using the pieces below you should be able to run for 180 days from the day the Host machine OS is installed.

Windows Server 2008 R2 with SP1 Trial

System Center Virtual Machine Manager 2008 R2 with SP1 Trial

Microsoft SQL Server 2008 R2 Trial (get the 64bit version)

Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 with SP1

Suggested Pre-Reading/Learning:

An assumption is being made that you are familiar with installing and configuring Windows Server 2008 R2 and its related Roles and Features. If not, then you should bookmark and leverage the following –

Microsoft Technet Windows Server TechCenter

Additional Resources:

Microsoft SQL Server 2008 R2 TechCenter

System Center Virtual Machine Manager 2008 R2 TechCenter

System Center Virtual Machine Manager Self-Service Portal 2.0 TechCenter

The Heavy Lifting – Installing the components

This section of the guide will walk you through the installation of each and every piece of the Microsoft Private Cloud solution. I have chosen an abbreviated rapid fire approach to this install. There are no screen shots. I do not go into detail around the choices made on the selection screens. If the options on a screen are not discussed in the document, you can assume the default selections will suffice.

There is a lot of opportunity to customize things along the way. There is a lot of opportunity to poke around and make changes during setup or while waiting on files to copy. I recommend that you NOT do this if you can avoid it. This document should provide a 100% success rate with ZERO errors during install if you follow it exactly. If you choose to stray and make changes during the install, you do so at the risk of your own time invested in this process.

Grab that caffeinated beverage. Take a big sip. Start your music. Take a deep breath. Here we go….

Install the Hyper-V Host

Windows Server 2008 R2 is the foundation upon which we build the entire private cloud. We leverage the built-in Hyper-V hypervisor to virtualize the servers, clients and applications that can then be served up through the self-service portal. It is absolutely essential that the base server is installed properly and is 100% stable.

Pre-install hardware configuration – Ensure that you have enabled virtualization support in the BIOS of your computer. How this is managed/enabled depends on the PC manufacturer and the BIOS used. You should also make sure that Data Execution Prevention (DEP) is active. There is a great blog post about how to do this here — http://blogs.technet.com/b/iftekhar/archive/2010/08/09/enable-hardware-settings-in-bios-to-run-hyper-v.aspx

*I recommend rebooting after each line item below*

Install Windows 2008 R2

Install any BIOS updates/hardware drivers/manufacturer updates for your system

Install SP1 (can be skipped if you installed Windows 2008 R2 + SP1 integrated)

Install all post-SP1 updates from Windows Update

*After each update install completes, reboot and run Windows Update again until no further updates are offered*

Optional – Rename host to desired friendly name

Install Necessary Windows Server Roles and Features

Add the Role: Active Directory Domain Services

Run the Active Directory Domain Services installation wizard (dcpromo.exe)

Create a new domain in a new forest

Supply the FQDN of the new forest root domain (e.g., privatecloud.local)

Supply the Domain NetBIOS name (e.g., PRIVATECLOUD)

Select Forest Functional Level (Windows 2003 is fine)

Select Domain Functional Level (Windows 2003 is fine)

Allow DNS to be installed (Assign Static IP if necessary)

(***I assigned a static IP address/mask for my local subnet and pointed to my default gateway. I then configured DNS with forwarders of 4.2.2.1 and 4.2.2.2 – well-known public DNS servers operated by Level 3. This allows for Internet access to download Windows Updates or other software as needed***)

Location for Database, Log Files, SYSVOL = Default

Assign Password for Directory Services Restore Mode

Complete Wizard and Reboot

Add the Role: Hyper-V

Create Virtual Network: Attach to local Ethernet

Complete Wizard and Reboot

Allow Wizard to Complete and Reboot

Install Web Server (IIS) Role

IIS is required by the Self Service Portal 2.0. The portal also requires specific Web Server (IIS) role services and the Message Queuing feature to be enabled.

Add the Role: Web Server (IIS) – Next

Role Services – Select:

Static Content

Default Document

ASP.NET

.NET Extensibility

ISAPI Extensions

ISAPI Filters

Request Filtering

Windows Authentication

IIS6 Metabase Compatibility

Confirmation – Install

Add the Feature: Message Queuing – Next

Confirmation – Install

The Windows Server 2008 R2 foundation is now complete!

The Windows Server 2008 R2 + Hyper-V host is now complete. There are a few (not really) optional steps below you may wish to take just for your own sanity.

Optional (recommended) – Install Windows Server Backup Features

Optional (recommended) – Perform Bare Metal Recovery Backup to external storage using Windows Backup (or the backup system of your choice)

Install SQL Server 2008 R2

SQL Server 2008 R2 is used for storing configuration information for System Center Virtual Machine Manager and the SCVMM Self-Service Portal. You do not need to be a SQL guru to get things up and running or even for day to day operations. You can pretty much forget about SQL except for routine patching. The exception to this (there are always exceptions) is if you use this document to implement a Private Cloud in a production environment using an existing production SQL Server. In that case, I beg you to speak to your SQL Admin *BEFORE* doing anything with SQL. You have been warned.

Launch SQL setup

New Installation or add features to an existing installation

Enter Product key or Specify a free edition

Accept License

Setup Support Files – Install

Setup Support Rules – Address any issues – Next

SQL Server Feature Installation – Next

Feature Selection – Select

Database Engine Services

Management Tools Basic

Default paths – Next

Installation Rules – Next

Default Instance (MSSQLSERVER) – Next

Disk Space Requirements – Next

Use the same account for all SQL server services

(if this host will be connecting to a network or the Internet, then I suggest following SQL security guidelines and creating unique accounts for each service. If you will only be using this for non-network-connected demonstrations, you can use the domainname\Administrator account for simplicity)

Supply credentials – Next

Windows authentication mode – Add current user – domainname\Administrator – Next

Error Reporting – Your choice – Next

Installation Configuration Rules – Next

Ready to Install – Summary – Install

Complete – Close

Windows Update – Check for Updates – Install – Reboot

(This one takes quite a while. Go get something to eat.)

Install System Center Virtual Machine Manager 2008 R2 + SP1

VMM Server Component

Start SCVMM Setup – Setup – VMM Server

Accept License – Next

CEIP – Your choice – Next

Product Registration – Fill in – Next

Prerequisite Check – Next

Installation Location – Default is fine – Next

SQL Server Settings – Use a supported version of SQL Server:

Server name: <name of localhost>

Check – Use the following credentials:

User name: <domain>\Administrator

Password: <password>

Select or enter a SQL instance: Drop down to MSSQLSERVER

Select or enter a database:  <enter a database name, e.g., SCVMMDB>

Check – Create a new database

Library Share Settings

Create a new library share – Defaults are fine – Next

Installation Settings

Ports – Defaults are fine

VMM Server Account – Local System is fine – Next

Summary of Settings – Install

Install the VMM Administrator Console

Once the Virtual Machine Manager Administrator Console is installed, this will become the primary interface used when dealing with your virtualization infrastructure. There will be times you will want or need to go back to the standard Hyper-V MMC, but you should get comfortable with the SCVMM Administrator console for day-to-day operations.

Start SCVMM Setup – Setup – VMM Administrator Console

Accept License – Next

CEIP – Your choice – Next

Prerequisite Check – Next

Installation Location – Default is fine – Next

Port – 8100 – Default is fine

Summary of Settings – Install

Windows Update – Check for Updates – Install – Reboot

Take a deep breath. Switch from caffeine to ….something more calming. You are almost done.

Almost….

Install the SCVMM Self-Service Portal 2.0 with SP1

***Note – You probably noticed an option to install a Self Service Portal from within the SCVMM Setup interface. DO NOT INSTALL THIS VERSION. It is an older version and does not provide the most current functionality. Download the SSP 2.0 + SP1 version from the link in the “Software Requirements” section of this document.***

The Self-Service Portal is one of the defining features of the Microsoft Private Cloud. Through this portal, administrators can create resource pools consisting of networks, storage, load balancers, virtual machine templates and domains. Administrators can then create and manage business units, which can use the self-service portal to request these pools of resources and create them on demand.

Start SSP2.0 Setup

Getting Started – (License page) – Accept – Next

Select

VMMSSP Server Component

VMMSSP Website Component

Next

Prerequisite Page – Should be all green – Next

VMMSSP Files – Default is fine – Next

Database Server: <localhost name>

Click – Get Instances

SQL Server Instance: Default

Credentials: Connect using Windows Authentication

Create a new Database or…..: Create a new database

Next

Provide an account for the server component

User Name: Administrator

Password: <password>

Domain: <domainname>

Test Account – Next

Configure WCF Endpoints – Defaults are fine – Next

Provide a list of Database Administrators

<domainname>\Administrator

Next

Configure the IIS web site for the VMMSSP website component

IIS website name:  VMMSSP <default>

Port Number:  81  <you cannot use 80 since it is assigned to the default web site>

Application pool name:  VMMSSPAppPool  <default>

User Name:  Administrator

Password :  <password>

Domain:  <domainname>

Next

Installation Summary – Install – Yes to Dialog

Close the final window.

Windows Update – Check for Updates – Install – Reboot

Once logged in:

Delete any setup files or unnecessary files/data you will not use for demonstration purposes

Empty the Recycle Bin

NOT OPTIONAL – Perform Bare Metal Recovery Backup to external storage using Windows Backup (or the backup system of your choice). Trust me. At this point you have 6-10 hours invested in this setup and you do NOT want to have to start over.

You now have the hardware and software in place to demo a private cloud!

However, a Private Cloud is more about HOW you use the infrastructure to create value, provide self-service, reduce overhead, automate resource creation and ultimately – reduce costs.

In the next document I produce, I will define a list of resources to create using the Hyper-V MMC, System Center Virtual Machine Manager, and the SCVMM Self-Service portal. I will then do a few recorded demos with these resources that you can customize for your own demonstration purposes.

Call To Action

Download a hard copy of this document for your own reference –

Bookmark my blog and watch for more posts and screen casts on Private Cloud. Here are some of the Planned Posts/Content/Screencasts I am working on:

Configuring Basic Resources for use in a Private Cloud

Creating virtual hard disks

Creating virtual machines

Creating templates in SCVMM

Creating Hardware and OS profiles in SCVMM

Configuring and using the Self-Service Portal 2.0

Initial Configuration

Creating and managing Infrastructures

Working with Virtual Machines

Managing User Roles and Business Units

Walking through the Request process

If there is a particular feature or process you would like to know more about, please contact me through a comment to this post or in email and we will discuss getting it produced.

For now, have fun playing with your new Private Cloud! (AFTER that bare metal recovery backup!)

Cheers!

Reference: http://blogs.technet.com/b/chrisavis/archive/2011/09/05/how-to-build-a-free-private-cloud-using-microsoft-technologies.aspx

How to build a private cloud

If you’re nervous about running your business applications on a public cloud, many experts recommend that you take a spin around a private cloud first.

Cloud: Ready or not?

But building and managing a cloud within your data center is not just another infrastructure project, says Joe Tobolski, director of cloud computing at Accenture.

“A number of technology companies are portraying this as something you can go out and buy – sprinkle a little cloud-ulator powder on your data center and you have an internal cloud,” he says. “That couldn’t be further from the truth.”

An internal, on-premises private cloud is what leading IT organizations have been working toward for years. It begins with data center consolidation; rationalization of OS, hardware and software platforms; and virtualization up and down the stack – servers, storage and network, Tobolski says.

Elasticity and pay-as-you-go pricing are guiding principles, which imply standardization, automation and commoditization of IT, he adds.

And it goes way beyond infrastructure and provisioning resources, Tobolski adds. “It’s about the application build and the user’s experience with IT, too.”

Despite all the hype, we’re at a very early stage when it comes to internal clouds. According to Forrester Research, only 5% of large enterprises globally are even capable of running an internal cloud, with maybe half of those actually having one, says James Staten, principal analyst with the firm.

But if you’re interested in exploring private cloud computing, here’s what you need to know.


First steps: Standardization, automation, shared resources

Forrester’s three tenets for building an internal cloud are similar to Accenture’s precepts for next-generation IT.

To build an on-premises cloud, you must have standardized – and documented — procedures for operating, deploying and maintaining that cloud environment, Staten says.

Most enterprises are not nearly standardized enough, although companies moving down the IT Infrastructure Library (ITIL) path for IT service management are closer to this objective than others, he adds.

Standardized operating procedures that allow efficiency and consistency are critical for the next foundational layer, which is automation. “You have to be trusting of and a big-time user of automation technology,” Staten says. “That’s usually a big hurdle for most companies.”

Automating deployment is probably the best place to start because it enables self-service capabilities. And for a private cloud, this isn’t Amazon-style self-service, in which any developer can deploy virtual machines (VMs) at will. “That’s chaos in a corporation and completely unrealistic,” Staten says.

Rather, for a private cloud, self-service means that an enterprise has established an automated workflow whereby resource requests go through an approvals process.

Once approved, the cloud platform automatically deploys the specified environment. More often, private cloud self-service is about developers asking for “three VMs of this size, a storage volume of this size and this much bandwidth,” Staten says. Self-service for end users seeking resources from the internal company cloud would be “I need a SharePoint volume or a file share.”
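
Conceptually, such a self-service pipeline reduces to a small request state machine. A deliberately tiny Python sketch of the idea (the states and resource fields are invented for illustration, not taken from any product):

    # States a resource request moves through in a self-service workflow.
    PENDING, APPROVED, DEPLOYED = "pending", "approved", "deployed"

    class ResourceRequest:
        def __init__(self, requester, vms, storage_gb, bandwidth_mbps):
            self.requester = requester
            self.spec = {"vms": vms, "storage_gb": storage_gb,
                         "bandwidth_mbps": bandwidth_mbps}
            self.state = PENDING

        def approve(self):
            self.state = APPROVED

        def deploy(self):
            # In a real cloud platform this triggers automated provisioning.
            if self.state != APPROVED:
                raise RuntimeError("cannot deploy an unapproved request")
            self.state = DEPLOYED

    req = ResourceRequest("dev-team", vms=3, storage_gb=500, bandwidth_mbps=100)
    req.approve()     # the approvals process, automated or human
    req.deploy()      # the platform deploys the specified environment
    print(req.state)  # deployed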

Thirdly, building an internal cloud means sharing resources – “and that usually knocks the rest of the companies off the list,” he says.

This is not about technology. “It’s organizational — marketing doesn’t want to share servers with HR, and finance won’t share with anybody. When you’re of that mindset, it’s hard to operate a cloud. Clouds are highly inefficient when resources aren’t shared,” Staten says.

Faced with that challenge, IT Director Marcos Athanasoulis has come up with a creative way to get participants comfortable with the idea of sharing resources on the Linux-based cloud infrastructure he oversees at Harvard Medical School (HMS) in Boston. It’s a contributed hardware approach, he says.

At HMS, which Athanasoulis calls the land of 1,000 CIOs, IT faces a bit of a unique challenge. It doesn’t have the authority to tell a lab what technology to use. It has some constraints in place, but if a lab wants to deploy its own infrastructure, it can. So when HMS approached the cloud concept four years ago, it did so wanting “a model where we could have capacity available in a shared way that the school paid for and subsidized so that folks with small needs could come in and get what they needed to get their research done but also be attractive to those labs that would have wanted to build their own high-performance computing or cloud environments if we didn’t offer a suitable alternative.”

With this approach, if a lab bought 100 nodes in the cloud, it got guaranteed access to that capacity. But if that capacity was idle, others’ workloads could run on it, Athanasoulis says.

“We told them – you own this hardware but if you let us integrate into the cloud, we’ll manage it for you and keep it updated and patched. But if you don’t like how this cloud is working, you can take it away.” He adds, “That turned out to be a good selling point, and not once [in four years] has anybody left the cloud.”

To support the contributed hardware approach, HMS uses Platform Computing’s Platform LSF workload automation software, Athanasoulis says. “The tool gives us the ability to set up queues and suspend jobs that are on the contributed hardware nodes, so that the people who own the hardware get guaranteed access and that suspended jobs get restored.”

Don’t proceed until you understand your services

If clouds are inefficient when resources aren’t shared, they can be outright pointless if services aren’t considered before all else. IBM, for example, begins every potential cloud engagement with an assessment of the different types of workloads and the risk, benefit and cost of moving each to different cloud models, says Fausto Bernardini, director of IT strategy and architecture, cloud portfolio services, at IBM.

Whether a workload has affinity with a private, public or hybrid model depends on a number of attributes, including such key ones as compliance and security but others, too, such as latency and interdependencies of components in applications, he says.

Many enterprises think about building a private cloud from a product perspective before they consider services and service requirements – and that’s the exact opposite of where to start, says Tom Bittman, vice president and distinguished analyst at Gartner.

“If you’re really going to build a private cloud, you need to know what your services are, and what the [service-level agreements], costs and road maps are for each of those. This is really about understanding whether the services are going toward the cloud computing style or not,” he says.

Common services with relatively static interfaces, even if your business is highly reliant on them, are those you should be considering for cloud-style computing, private or public, Bittman says. E-mail is one example.

“I may use it a lot, but it’s not intertwined with the inner workings of my company. It’s the kind of service moving in the direction of interface and independence – I don’t want it to be integrated tightly with the company. I want to make it as separate as possible, easy to use, available from self-service interface,” Bittman says. “And if I’ve customized this type of service over time, I’ve got to undo that and make it as standard as possible.”

Conversely, services that define a business and are constantly the focus of innovative initiatives are not cloud contenders, Bittman says. “The goal for these services is intimacy and integration, and they are never going to the cloud. They may use cloud functions at a low level, like for raw compute, but the interface to the company isn’t going to be a cloud model.”

Only once you understand which services are right for the cloud and how long it might take you to get them to a public-readiness state will you be ready to build a business case and start to look at building a private cloud from a technology perspective, he says.

The final tiers: Service management and access management

Toward that end, Gartner has defined four tiers of components for building a private cloud.

At the bottom sits the resource tier comprising infrastructure, platforms or software. Raw virtualization comes to mind immediately, but VMs aren’t the only option – as long as you’ve got a mechanism for turning resources into a pool you’re on the way, Bittman says. Rapid re-provisioning technology is another option, for example.

Above the resource pool sits the resource management tier. “This is where I manage that pool in an automated manner,” says Bittman, noting that for VMware environments, this is about using VMware Distributed Resource Scheduler.

“These two levels are fairly mature,” Bittman says. “You can find products for these available in the market, although there’s not a lot of competition yet at the resource management tier.”

Next comes the service management tier. “This is where there’s more magic required,” he says. “I need something that lets me do service governance, something that lets me convert pools of resources into service levels. In the end, I need to be able to present to the user some kind of service-level interface that says ‘performance’ or ‘availability’ and have this services management tier for delivering on that.”

As you think about building your private cloud, understand that the gap between need and product availability is pretty big, Bittman says. “VMware, for example, does a really good job of allowing you to manage your virtualization pool, but it doesn’t know anything about services. VMware’s vCenter AppSpeed is one early attempt to get started on this,” he adds.

“What we really need is a good service governor, and that doesn’t exist yet,” says Bittman.

Sitting atop it all is the access management tier, which is all about the user self-service interface. “It presents a service catalog, and gives users all the knobs to turn and lets you manage subscribers,” Bittman says. “The interface has to be tied in some way to costing and chargeback, or at least metering – it ties to the service management tier at that level.”

Chargeback is a particularly thorny challenge for private cloud builders, but one that they can’t ignore for long. “It’s tricky from a technology perspective — what do I charge based on? But also from political and cultural perspectives,” Bittman says. “But frankly, if I’m going to move to cloud computing I’m going to move to a chargeback model so that’s going to be one of the barriers that needs to be broken anyways.”

In the end, it’s about the business

And while cloud-builders need to think in terms of elasticity, automation, self-service and chargeback, they shouldn’t be too rigid about the distinctions at this stage of cloud’s evolution, Bittman says. “We will see a lot of organizations doing pure cloud and a lot doing pure non-cloud, and a whole lot of stuff somewhere in the middle. What it all really comes down to is, ‘Is there benefit?'”

Wentworth-Douglass Hospital, in Dover, N.H., for example, is building what it calls a private cloud using a vBlock system from Cisco, EMC and VMware. But it’s doing so more with an eye toward abstraction of servers and not so much on the idea of self-provisioning or software-as-a-service (SaaS), says Scott Heffner, network operations manager for the hospital.

“Maybe we’ll get to SaaS eventually, and we are doing as much automation as we can, but I’m introducing concepts slowly to the organization because the cloud model is so advanced that to get the whole organization to conceive of and understand it right off the bat is too much,” he says.

Reference: http://www.networkworld.com/supp/2010/ndc3/051010-ndc-cloud.html?page=3

Multiplexed Transport Layer Security

In information technology, the Transport Layer Security (TLS) protocol provides connection security with mutual authentication, data confidentiality and integrity, key generation and distribution, and security parameters negotiation. However, missing from the protocol is a way to multiplex application data over a single TLS session.

The Multiplexed Transport Layer Security (MTLS) protocol is a proposed TLS sub-protocol that runs over TLS or DTLS. The MTLS design provides application multiplexing over a single TLS (or DTLS) session. Instead of associating a TLS connection with each application, MTLS allows several applications to protect their exchanges over a single TLS session.

MTLS is an IETF Internet-Draft (http://tools.ietf.org/html/draft-badra-hajjeh-mtls-05), which expired in October 2009.
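
The core idea is simply to tag each application’s data with a channel identifier inside the single protected session. The framing below is a conceptual Python sketch only – the draft defines its own record format, which is not reproduced here:

    import struct

    def frame(channel_id, data):
        """Prefix data with a channel ID and length (conceptual framing)."""
        return struct.pack("!HI", channel_id, len(data)) + data

    def unframe(buf):
        """Yield (channel_id, data) pairs from a concatenated byte stream."""
        offset = 0
        while offset < len(buf):
            channel_id, length = struct.unpack_from("!HI", buf, offset)
            offset += 6
            yield channel_id, buf[offset:offset + length]
            offset += length

    # Two applications share one (hypothetically TLS-protected) stream.
    stream = frame(1, b"GET / HTTP/1.1") + frame(2, b"NOOP")
    for chan, payload in unframe(stream):
        print(chan, payload)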

Reference: http://en.wikipedia.org/wiki/Multiplexed_Transport_Layer_Security