Create and Populate Date Dimension for Data Warehouse

A date dimension plays an important role in data warehouse design; it provides the ability to study the behavior and trends of your data over a period of time.

You can study your data by grouping it on the various attributes of the date dimension.

For example:

You might want to analyze total sales by each month of the year, show total sales by each quarter of the year, or find out on which days of the year or month most sales take place.

Once the complete solution is implemented in the data warehouse, the relationship to the date dimension gives you all of this ability to slice and dice your data.
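For example, once the warehouse is in place, a query like the following returns total sales by quarter. The FactSales table and its columns are hypothetical and are shown only to illustrate the idea.

-- Hypothetical fact table joined to the date dimension to analyze sales by quarter
SELECT
	d.[Year],
	d.QuarterName,
	SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales AS f
	JOIN dbo.DimDate AS d ON d.DateKey = f.DateKey
GROUP BY d.[Year], d.QuarterName
ORDER BY d.[Year], MIN(d.Quarter)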

So, as an initial step, you need to design your date dimension and time dimension and populate them with a range of values.

For the design of the time dimension, you can refer to my other tip posted on CodeProject, “Design and Populate Time Dimension with 24 Hour plus Values”.

This date dimension stores dates in the various formats used across the world; for example, “dd-MM-yyyy” is used in Europe, the UK, India, etc., while the “MM-dd-yyyy” format is used in the US.
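The population script below assumes that a [dbo].[DimDate] table already exists. The original table definition is not reproduced here, so the following is only a sketch: the column names are inferred from the INSERT and the later UPDATE statements, and the data types are assumptions.

CREATE TABLE [dbo].[DimDate]
(
	[DateKey] INT NOT NULL PRIMARY KEY,   -- yyyymmdd
	[Date] DATETIME,
	[FullDateUK] CHAR(10),                -- dd/MM/yyyy
	[FullDateUSA] CHAR(10),               -- MM/dd/yyyy
	[DayOfMonth] INT,
	[DaySuffix] VARCHAR(4),               -- 1st, 2nd, 3rd, ...
	[DayName] VARCHAR(10),
	[DayOfWeekUSA] INT,                   -- 1 = Sunday ... 7 = Saturday (default DATEFIRST)
	[DayOfWeekUK] INT,                    -- 1 = Monday ... 7 = Sunday
	[DayOfWeekInMonth] INT,
	[DayOfWeekInYear] INT,
	[DayOfQuarter] INT,
	[DayOfYear] INT,
	[WeekOfMonth] INT,
	[WeekOfQuarter] INT,
	[WeekOfYear] INT,
	[Month] INT,
	[MonthName] VARCHAR(10),
	[MonthOfQuarter] INT,
	[Quarter] INT,
	[QuarterName] VARCHAR(10),
	[Year] INT,
	[YearName] CHAR(7),                   -- e.g. 'CY 2013'
	[MonthYear] CHAR(10),                 -- e.g. 'Jan-2013'
	[MMYYYY] CHAR(6),
	[FirstDayOfMonth] DATETIME,
	[LastDayOfMonth] DATETIME,
	[FirstDayOfQuarter] DATETIME,
	[LastDayOfQuarter] DATETIME,
	[FirstDayOfYear] DATETIME,
	[LastDayOfYear] DATETIME,
	[IsHolidayUSA] BIT,
	[IsWeekday] BIT,
	[HolidayUSA] VARCHAR(50),
	[IsHolidayUK] BIT,
	[HolidayUK] VARCHAR(50)
)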

 

/********************************************************************************************/
--Specify Start Date and End date here
--Value of Start Date Must be Less than Your End Date 

DECLARE @StartDate DATETIME = '01/01/2013' --Starting value of Date Range
DECLARE @EndDate DATETIME = '01/01/2015' --End Value of Date Range

--Temporary Variables To Hold the Values During Processing of Each Date of Year
DECLARE
	@DayOfWeekInMonth INT,
	@DayOfWeekInYear INT,
	@DayOfQuarter INT,
	@WeekOfMonth INT,
	@CurrentYear INT,
	@CurrentMonth INT,
	@CurrentQuarter INT

/*Table Data type to store the day of week count for the month and year*/
DECLARE @DayOfWeek TABLE (DOW INT, MonthCount INT, QuarterCount INT, YearCount INT)

INSERT INTO @DayOfWeek VALUES (1, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (2, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (3, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (4, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (5, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (6, 0, 0, 0)
INSERT INTO @DayOfWeek VALUES (7, 0, 0, 0)

--Extract and assign various parts of Values from Current Date to Variable

DECLARE @CurrentDate AS DATETIME = @StartDate
SET @CurrentMonth = DATEPART(MM, @CurrentDate)
SET @CurrentYear = DATEPART(YY, @CurrentDate)
SET @CurrentQuarter = DATEPART(QQ, @CurrentDate)

/********************************************************************************************/
--Proceed only if Start Date(Current date ) is less than End date you specified above

WHILE @CurrentDate < @EndDate
BEGIN
 
/*Begin day of week logic*/

         /*Check for Change in Month of the Current date if Month changed then 
          Change variable value*/
	IF @CurrentMonth != DATEPART(MM, @CurrentDate) 
	BEGIN
		UPDATE @DayOfWeek
		SET MonthCount = 0
		SET @CurrentMonth = DATEPART(MM, @CurrentDate)
	END

        /* Check for Change in Quarter of the Current date if Quarter changed then change 
         Variable value*/

	IF @CurrentQuarter != DATEPART(QQ, @CurrentDate)
	BEGIN
		UPDATE @DayOfWeek
		SET QuarterCount = 0
		SET @CurrentQuarter = DATEPART(QQ, @CurrentDate)
	END
       
        /* Check for Change in Year of the Current date if Year changed then change 
         Variable value*/
	

	IF @CurrentYear != DATEPART(YY, @CurrentDate)
	BEGIN
		UPDATE @DayOfWeek
		SET YearCount = 0
		SET @CurrentYear = DATEPART(YY, @CurrentDate)
	END
	
        -- Set values in table data type created above from variables 

	UPDATE @DayOfWeek
	SET 
		MonthCount = MonthCount + 1,
		QuarterCount = QuarterCount + 1,
		YearCount = YearCount + 1
	WHERE DOW = DATEPART(DW, @CurrentDate)

	SELECT
		@DayOfWeekInMonth = MonthCount,
		@DayOfQuarter = QuarterCount,
		@DayOfWeekInYear = YearCount
	FROM @DayOfWeek
	WHERE DOW = DATEPART(DW, @CurrentDate)
	
/*End day of week logic*/


/* Populate Your Dimension Table with values*/
	
	INSERT INTO [dbo].[DimDate]
	SELECT
		
		CONVERT (char(8),@CurrentDate,112) as DateKey,
		@CurrentDate AS Date,
		CONVERT (char(10),@CurrentDate,103) as FullDateUK,
		CONVERT (char(10),@CurrentDate,101) as FullDateUSA,
		DATEPART(DD, @CurrentDate) AS DayOfMonth,
		--Apply Suffix values like 1st, 2nd 3rd etc..
		CASE 
			WHEN DATEPART(DD,@CurrentDate) IN (11,12,13) 
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'th'
			WHEN RIGHT(DATEPART(DD,@CurrentDate),1) = 1 
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'st'
			WHEN RIGHT(DATEPART(DD,@CurrentDate),1) = 2 
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'nd'
			WHEN RIGHT(DATEPART(DD,@CurrentDate),1) = 3 
			THEN CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'rd'
			ELSE CAST(DATEPART(DD,@CurrentDate) AS VARCHAR) + 'th' 
			END AS DaySuffix,
		
		DATENAME(DW, @CurrentDate) AS DayName,
		DATEPART(DW, @CurrentDate) AS DayOfWeekUSA,

		-- check for day of week as Per US and change it as per UK format 
		CASE DATEPART(DW, @CurrentDate)
			WHEN 1 THEN 7
			WHEN 2 THEN 1
			WHEN 3 THEN 2
			WHEN 4 THEN 3
			WHEN 5 THEN 4
			WHEN 6 THEN 5
			WHEN 7 THEN 6
			END 
			AS DayOfWeekUK,
		
		@DayOfWeekInMonth AS DayOfWeekInMonth,
		@DayOfWeekInYear AS DayOfWeekInYear,
		@DayOfQuarter AS DayOfQuarter,
		DATEPART(DY, @CurrentDate) AS DayOfYear,
		DATEPART(WW, @CurrentDate) + 1 - DATEPART(WW, CONVERT(VARCHAR, DATEPART(MM, @CurrentDate)) + '/1/' + CONVERT(VARCHAR, DATEPART(YY, @CurrentDate))) AS WeekOfMonth,
		(DATEDIFF(DD, DATEADD(QQ, DATEDIFF(QQ, 0, @CurrentDate), 0), @CurrentDate) / 7) + 1 AS WeekOfQuarter,
		DATEPART(WW, @CurrentDate) AS WeekOfYear,
		DATEPART(MM, @CurrentDate) AS Month,
		DATENAME(MM, @CurrentDate) AS MonthName,
		CASE
			WHEN DATEPART(MM, @CurrentDate) IN (1, 4, 7, 10) THEN 1
			WHEN DATEPART(MM, @CurrentDate) IN (2, 5, 8, 11) THEN 2
			WHEN DATEPART(MM, @CurrentDate) IN (3, 6, 9, 12) THEN 3
			END AS MonthOfQuarter,
		DATEPART(QQ, @CurrentDate) AS Quarter,
		CASE DATEPART(QQ, @CurrentDate)
			WHEN 1 THEN 'First'
			WHEN 2 THEN 'Second'
			WHEN 3 THEN 'Third'
			WHEN 4 THEN 'Fourth'
			END AS QuarterName,
		DATEPART(YEAR, @CurrentDate) AS Year,
		'CY ' + CONVERT(VARCHAR, DATEPART(YEAR, @CurrentDate)) AS YearName,
		LEFT(DATENAME(MM, @CurrentDate), 3) + '-' + CONVERT(VARCHAR, DATEPART(YY, @CurrentDate)) AS MonthYear,
		RIGHT('0' + CONVERT(VARCHAR, DATEPART(MM, @CurrentDate)), 2) + CONVERT(VARCHAR, DATEPART(YY, @CurrentDate)) AS MMYYYY,
		CONVERT(DATETIME, CONVERT(DATE, DATEADD(DD, - (DATEPART(DD, @CurrentDate) - 1), @CurrentDate))) AS FirstDayOfMonth,
		CONVERT(DATETIME, CONVERT(DATE, DATEADD(DD, - (DATEPART(DD, (DATEADD(MM, 1, @CurrentDate)))), DATEADD(MM, 1, @CurrentDate)))) AS LastDayOfMonth,
		DATEADD(QQ, DATEDIFF(QQ, 0, @CurrentDate), 0) AS FirstDayOfQuarter,
		DATEADD(QQ, DATEDIFF(QQ, -1, @CurrentDate), -1) AS LastDayOfQuarter,
		CONVERT(DATETIME, '01/01/' + CONVERT(VARCHAR, DATEPART(YY, @CurrentDate))) AS FirstDayOfYear,
		CONVERT(DATETIME, '12/31/' + CONVERT(VARCHAR, DATEPART(YY, @CurrentDate))) AS LastDayOfYear,
		NULL AS IsHolidayUSA,
		CASE DATEPART(DW, @CurrentDate)
			WHEN 1 THEN 0
			WHEN 2 THEN 1
			WHEN 3 THEN 1
			WHEN 4 THEN 1
			WHEN 5 THEN 1
			WHEN 6 THEN 1
			WHEN 7 THEN 0
			END AS IsWeekday,
		NULL AS HolidayUSA, NULL, NULL --last two columns are assumed to be IsHolidayUK and HolidayUK; populated in the steps below

	SET @CurrentDate = DATEADD(DD, 1, @CurrentDate)
END

/********************************************************************************************/
 
Step 3.
Update the UK holiday values as per the UK government's declaration of national holidays. Note that the hard-coded day-of-month values used below for the moveable holidays (Good Friday, Easter Monday, and the bank holidays) match the 2014 calendar; for other years in the populated range they need to be adjusted.

/*Update HOLIDAY fields of UK as per Govt. Declaration of National Holiday*/
	
-- Good Friday  April 18 
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Good Friday'
	WHERE [Month] = 4 AND [DayOfMonth]  = 18

-- Easter Monday  April 21 
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Easter Monday'
	WHERE [Month] = 4 AND [DayOfMonth]  = 21

-- Early May Bank Holiday   May 5 
   UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Early May Bank Holiday'
	WHERE [Month] = 5 AND [DayOfMonth]  = 5

-- Spring Bank Holiday  May 26 
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Spring Bank Holiday'
	WHERE [Month] = 5 AND [DayOfMonth]  = 26

-- Summer Bank Holiday  August 25 
    UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Summer Bank Holiday'
	WHERE [Month] = 8 AND [DayOfMonth]  = 25

-- Boxing Day  December 26  	
    UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Boxing Day'
	WHERE [Month] = 12 AND [DayOfMonth]  = 26	

--CHRISTMAS
	UPDATE [dbo].[DimDate]
		SET HolidayUK = 'Christmas Day'
	WHERE [Month] = 12 AND [DayOfMonth]  = 25

--New Years Day
	UPDATE [dbo].[DimDate]
		SET HolidayUK  = 'New Year''s Day'
	WHERE [Month] = 1 AND [DayOfMonth] = 1

--Update flag for UK Holidays 1= Holiday, 0=No Holiday
	
	UPDATE [dbo].[DimDate]
		SET IsHolidayUK = CASE WHEN HolidayUK IS NULL THEN 0 WHEN HolidayUK IS NOT NULL THEN 1 END
		
 
Step 4.
Update the USA holiday values as per the US government's declaration of national holidays.

/*Update HOLIDAY Field of USA In dimension*/
	
 	/*THANKSGIVING - Fourth THURSDAY in November*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Thanksgiving Day'
	WHERE
		[Month] = 11 
		AND [DayName] = 'Thursday' 
		AND DayOfWeekInMonth = 4

	/*CHRISTMAS*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Christmas Day'
		
	WHERE [Month] = 12 AND [DayOfMonth]  = 25

	/*4th of July*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Independence Day'
	WHERE [Month] = 7 AND [DayOfMonth] = 4

	/*New Years Day*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'New Year''s Day'
	WHERE [Month] = 1 AND [DayOfMonth] = 1

	/*Memorial Day - Last Monday in May*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Memorial Day'
	FROM [dbo].[DimDate]
	WHERE DateKey IN 
		(
		SELECT
			MAX(DateKey)
		FROM [dbo].[DimDate]
		WHERE
			[MonthName] = 'May'
			AND [DayName] = 'Monday'
		GROUP BY
			[Year],
			[Month]
		)

	/*Labor Day - First Monday in September*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Labor Day'
	FROM [dbo].[DimDate]
	WHERE DateKey IN 
		(
		SELECT
			MIN(DateKey)
		FROM [dbo].[DimDate]
		WHERE
			[MonthName] = 'September'
			AND [DayName] = 'Monday'
		GROUP BY
			[Year],
			[Month]
		)

	/*Valentine's Day*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Valentine''s Day'
	WHERE
		[Month] = 2 
		AND [DayOfMonth] = 14

	/*Saint Patrick's Day*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Saint Patrick''s Day'
	WHERE
		[Month] = 3
		AND [DayOfMonth] = 17

	/*Martin Luther King Day - Third Monday in January starting in 1983*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Martin Luther King Jr Day'
	WHERE
		[Month] = 1
		AND [DayName] = 'Monday'
		AND [Year] >= 1983
		AND DayOfWeekInMonth = 3

	/*President's Day - Third Monday in February*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'President''s Day'
	WHERE
		[Month] = 2
		AND [DayName] = 'Monday'
		AND DayOfWeekInMonth = 3

	/*Mother's Day - Second Sunday of May*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Mother''s Day'
	WHERE
		[Month] = 5
		AND [DayName] = 'Sunday'
		AND DayOfWeekInMonth = 2

	/*Father's Day - Third Sunday of June*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Father''s Day'
	WHERE
		[Month] = 6
		AND [DayName] = 'Sunday'
		AND DayOfWeekInMonth = 3

	/*Halloween 10/31*/
	UPDATE [dbo].[DimDate]
		SET HolidayUSA = 'Halloween'
	WHERE
		[Month] = 10
		AND [DayOfMonth] = 31

	/*Election Day - The first Tuesday after the first Monday in November*/
	BEGIN
	DECLARE @Holidays TABLE (ID INT IDENTITY(1,1), DateID INT, [Week] TINYINT, [Year] CHAR(4), [Day] CHAR(2))

		INSERT INTO @Holidays(DateID, [Year],[Day])
		SELECT
			DateKey,
			[Year],
			[DayOfMonth] 
		FROM [dbo].[DimDate]
		WHERE
			[Month] = 11
			AND [DayName] = 'Monday'
		ORDER BY
			YEAR,
			DayOfMonth 

		DECLARE @CNTR INT, @POS INT, @STARTYEAR INT, @ENDYEAR INT, @CURRENTYEAR INT, @MINDAY INT

		SELECT
			@CURRENTYEAR = MIN([Year])
			, @STARTYEAR = MIN([Year])
			, @ENDYEAR = MAX([Year])
		FROM @Holidays

		WHILE @CURRENTYEAR <= @ENDYEAR
		BEGIN
			SELECT @CNTR = COUNT([Year])
			FROM @Holidays
			WHERE [Year] = @CURRENTYEAR

			SET @POS = 1

			WHILE @POS <= @CNTR
			BEGIN
				SELECT @MINDAY = MIN([Day])
				FROM @Holidays
				WHERE
					[Year] = @CURRENTYEAR
					AND [Week] IS NULL

				UPDATE @Holidays
					SET [Week] = @POS
				WHERE
					[Year] = @CURRENTYEAR
					AND [Day] = @MINDAY

				SELECT @POS = @POS + 1
			END

			SELECT @CURRENTYEAR = @CURRENTYEAR + 1
		END

		UPDATE [dbo].[DimDate]
			SET HolidayUSA  = 'Election Day'				
		FROM [dbo].[DimDate] DT
			JOIN @Holidays HL ON (HL.DateID + 1) = DT.DateKey
		WHERE
			[Week] = 1
	END
	--set flag for USA holidays in Dimension
	UPDATE [dbo].[DimDate]
		SET IsHolidayUSA = CASE WHEN HolidayUSA IS NULL THEN 0 WHEN HolidayUSA IS NOT NULL THEN 1 END
/*****************************************************************************************/


Reference: http://www.codeproject.com/Articles/647950/Create-and-Populate-Date-Dimension-for-Data-Wareho

Create & Populate Time Dimension with 24 Hour+ Values

A common task most of us face while setting up a new data warehouse is creating a time dimension.

This tip will especially help people who work in Business Intelligence and who, when setting up a new data warehouse as a starting point, need to create and fill their time dimension with the necessary values.

I searched the internet for a T-SQL script that could create and fill a time dimension with 24-hour-plus values. I did not find any ready-made script, so I invested my time in creating this script and am now sharing it so that it can help everyone.

The time dimension script below creates the time dimension table and populates it with appropriate values. It also creates time buckets in the table and fills them with group values, so that users can aggregate data using various combinations of hourly or day-time buckets, analyze data using these buckets, and study trends over the course of the day.
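To illustrate how the buckets might be used once DimTime is populated, a query such as the following aggregates a measure by day-time bucket. The FactSales table, its TimeKey column, and the SalesAmount measure are hypothetical.

-- Hypothetical fact table joined to the time dimension to analyze sales by time-of-day bucket
SELECT
	t.DayTimeBucket,
	SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales AS f
	JOIN dbo.DimTime AS t ON t.TimeKey = f.TimeKey
GROUP BY t.DayTimeBucketGroupKey, t.DayTimeBucket
ORDER BY t.DayTimeBucketGroupKey  -- the group key keeps the buckets in time-of-day order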


CREATE TABLE [dbo].[DimTime](
[TimeKey] [int] NOT NULL,
[TimeAltKey] [int] NOT NULL,
[Time30] [varchar](8) NOT NULL,
[Hour30] [tinyint] NOT NULL,
[MinuteNumber] [tinyint] NOT NULL,
[SecondNumber] [tinyint] NOT NULL,
[TimeInSecond] [int] NOT NULL,
[HourlyBucket] varchar(15)not null,
[DayTimeBucketGroupKey] int not null,
[DayTimeBucket] varchar(100) not null
CONSTRAINT [PK_DimTime] PRIMARY KEY CLUSTERED
(
[TimeKey] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/***** Create Stored procedure In Test_DW and Run SP To Fill Time Dimension with Values****/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[FillDimTime]
as
BEGIN
--Specify Total Number of Hours You need to fill in Time Dimension
DECLARE @Size INTEGER
--If @Size=32 then this will fill values up to 32:59 hr in the Time Dimension
SET @Size = 23
DECLARE @hour INTEGER
DECLARE @minute INTEGER
DECLARE @second INTEGER
DECLARE @k INTEGER
DECLARE @TimeAltKey INTEGER
DECLARE @TimeInSeconds INTEGER
DECLARE @Time30 varchar(25)
DECLARE @Hour30 varchar(4)
DECLARE @Minute30 varchar(4)
DECLARE @Second30 varchar(4)
DECLARE @HourBucket varchar(15)
DECLARE @HourBucketGroupKey int
DECLARE @DayTimeBucket varchar(100)
DECLARE @DayTimeBucketGroupKey int
SET @hour = 0
SET @minute = 0
SET @second = 0
SET @k = 0
SET @TimeAltKey = 0
WHILE (@hour <= @Size)
BEGIN
	IF (@hour < 10)
	BEGIN
		SET @Hour30 = '0' + CAST(@hour AS varchar(10))
	END
	ELSE
	BEGIN
		SET @Hour30 = @hour
	END
	--Create Hour Bucket Value
	SET @HourBucket = @Hour30 + ':00' + '-' + @Hour30 + ':59'
	WHILE (@minute <= 59)
	BEGIN
		WHILE (@second <= 59)
		BEGIN
			SET @TimeAltKey = @hour * 10000 + @minute * 100 + @second
			SET @TimeInSeconds = @hour * 3600 + @minute * 60 + @second
			IF @minute < 10
			BEGIN
				SET @Minute30 = '0' + CAST(@minute AS varchar(10))
			END
			ELSE
			BEGIN
				SET @Minute30 = @minute
			END
			IF @second < 10
			BEGIN
				SET @Second30 = '0' + CAST(@second AS varchar(10))
			END
			ELSE
			BEGIN
				SET @Second30 = @second
			END
			--Concatenate values for Time30
			SET @Time30 = @Hour30 + ':' + @Minute30 + ':' + @Second30
			--DayTimeBucketGroupKey can be used for sorting the DayTime Bucket in proper order
			SELECT @DayTimeBucketGroupKey =
			CASE
				WHEN (@TimeAltKey >= 0 AND @TimeAltKey <= 25959) THEN 0
				WHEN (@TimeAltKey >= 30000 AND @TimeAltKey <= 65959) THEN 1
				WHEN (@TimeAltKey >= 70000 AND @TimeAltKey <= 85959) THEN 2
				WHEN (@TimeAltKey >= 90000 AND @TimeAltKey <= 115959) THEN 3
				WHEN (@TimeAltKey >= 120000 AND @TimeAltKey <= 135959) THEN 4
				WHEN (@TimeAltKey >= 140000 AND @TimeAltKey <= 155959) THEN 5
				WHEN (@TimeAltKey >= 160000 AND @TimeAltKey <= 175959) THEN 6
				WHEN (@TimeAltKey >= 180000 AND @TimeAltKey <= 235959) THEN 7
				WHEN (@TimeAltKey >= 240000) THEN 8
			END
			--print @DayTimeBucketGroupKey
			--DayTimeBucket divides the time of day into specific zones
			--so data can be grouped per bucket for analysis by time of day
			SELECT @DayTimeBucket =
			CASE
				WHEN (@TimeAltKey >= 0 AND @TimeAltKey <= 25959) THEN 'Late Night (00:00 AM To 02:59 AM)'
				WHEN (@TimeAltKey >= 30000 AND @TimeAltKey <= 65959) THEN 'Early Morning (03:00 AM To 6:59 AM)'
				WHEN (@TimeAltKey >= 70000 AND @TimeAltKey <= 85959) THEN 'AM Peak (7:00 AM To 8:59 AM)'
				WHEN (@TimeAltKey >= 90000 AND @TimeAltKey <= 115959) THEN 'Mid Morning (9:00 AM To 11:59 AM)'
				WHEN (@TimeAltKey >= 120000 AND @TimeAltKey <= 135959) THEN 'Lunch (12:00 PM To 13:59 PM)'
				WHEN (@TimeAltKey >= 140000 AND @TimeAltKey <= 155959) THEN 'Mid Afternoon (14:00 PM To 15:59 PM)'
				WHEN (@TimeAltKey >= 160000 AND @TimeAltKey <= 175959) THEN 'PM Peak (16:00 PM To 17:59 PM)'
				WHEN (@TimeAltKey >= 180000 AND @TimeAltKey <= 235959) THEN 'Evening (18:00 PM To 23:59 PM)'
				WHEN (@TimeAltKey >= 240000) THEN 'Previous Day Late Night (24:00 PM to ' + CAST(@Size AS varchar(10)) + ':00 PM )'
			END
			--print @DayTimeBucket
			INSERT INTO DimTime (TimeKey, TimeAltKey, [Time30], [Hour30], [MinuteNumber], [SecondNumber], [TimeInSecond], [HourlyBucket], DayTimeBucketGroupKey, DayTimeBucket)
			VALUES (@k, @TimeAltKey, @Time30, @hour, @minute, @second, @TimeInSeconds, @HourBucket, @DayTimeBucketGroupKey, @DayTimeBucket)
			SET @second = @second + 1
			SET @k = @k + 1
		END
		SET @minute = @minute + 1
		SET @second = 0
	END
	SET @hour = @hour + 1
	SET @minute = 0
END
END
Go
Exec [FillDimTime]
go
select * from DimTime

Reference: http://www.codeproject.com/Tips/642912/Create-Populate-Time-Dimension-with-24-Hourplus-Va

Google Chrome issue with SSRS

SSRS reports are not supported in the Google Chrome browser (http://msdn.microsoft.com/en-us/library/ms251673.aspx). Even Firefox has issues while rendering SSRS report iFrames, but this workaround seems to work for Firefox (http://bobp1339.blogspot.com/2010/08/ssrs-fix-for-firefoxchrome-report-size.html).

The problem with Google Chrome is a known issue when using Report Manager. The ideal solution is to use the report viewer (http://localhost/reportserver) instead of Report Manager (http://localhost/reports).

Report Manager (screenshot)

Report Viewer (screenshot)

Reference: http://bineeshthomas.wordpress.com/2011/09/16/google-chrome-issue-with-ssrs/

Power View–SQL Server 2012(code named Denali)

Power View, previously code-named Project Crescent, is a SQL Server 2012 Reporting Services add-in for SharePoint 2010 and works with Excel PowerPivot for SQL Server 2012.

Power View (RDLX) and Power Pivot (XLSX)  samples can be downloaded at
http://www.microsoft.com/download/en/details.aspx?id=26718#overview

Power View is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. They can easily create and interact with views of data from data models based on PowerPivot workbooks published in a PowerPivot Gallery, or tabular models deployed to SSAS instances. Power View is a browser-based Silverlight application launched from SharePoint Server 2010 that enables users to present and share insights with others in their organization through interactive presentations.

 

 

HIGHLIGHTS OF POWER VIEW

  • Presentation Ready: A polished look and feel suitable for presentation. It is WYSIWYG, so it works with real data and there is no need to preview it to see how it looks. There are reading and full-screen views. An interactive Power View report can also be exported to a PowerPoint slide, and Power View reports can be published to SharePoint so that users can view and interact with them.
  • Data Model based: It is a thin web client downloaded through the browser from a SharePoint 2010 data model (a PowerPivot model workbook or a tabular model running on an SSAS instance).
  • Visual Design experience: There is no separation of design time and run time. We can switch between views and change the perspective as simply as working with an Excel sheet and its ribbon controls.
  • Creating Data Visualizations: A wide variety of visualizations are available, such as tables, matrices, charts, graphs, and bubble charts, and we can switch between them very quickly and easily.
  • Highlighting and filtering data: Different types of filters are available. A filter can be applied to a single visualization or globally. We can also highlight data using the filters in the visualizations.
  • Sort: Sorting can be applied to almost anything.
  • Reports with multiple views: A report can have multiple views with different visualizations and different filters.
  • Performance: It fetches only the data needed for a particular visualization. This is beneficial for visualizations based on millions of records.

Reference:http://bineeshthomas.wordpress.com/2012/04/13/power-viewsql-server-2012code-named-denali/

SQL Server Reporting Services 2008 Tutorial in 5 Minutes

Some facts:

  • SQL Server Reporting Services were great in the past. SSRS 2008 are better.
  • MSDN is great. MSDN has lots of details. Sometimes it’s too much if you are under pressure.
  • Google usually gets me the information I want faster. Especially when I need to get something done quickly.

All in all, this post is for those of you who need to learn how to use SQL Server Reporting Services fast, or need to solve the above-mentioned problem fast, without knowing too much about the details. I read and like a lot of these introductory posts, so now I’m writing another one; here goes.

I assume you have SQL Server Reporting Services already set up (by your grand one-and-only unmissable sys-admins or by yourself, but in the latter case you probably are not the target audience for this) and that you have access to all the required tools. Talking about tools, there are three you should know of:

– RSConfigTool – (Start menu > SQL Server 2008 > Configuration > Reporting Services Configuration): What do I need it for: troubleshooting, finding URLs, configuring the report server. Having problems trying to connect? See Additional Tip #1.

– Report Manager – (http://<yourserver>/reports): What do I need it for: Managing your reports (you won’t actually see anything on a clean install yet)

– Business Intelligence Studio – (Start menu > SQL Server 2008 > SQL Server Business Intelligence Development Studio): What do I need it for: Creating and editing your reports

and of course you will need SQL Server Management Studio for managing the data the report is based upon.

(side note: This tutorial is based on the 2008 version, can’t say how many differences there are between this and the older versions.)

1. Create your views

There are ways to aggregate your data in the report itself, but it’s easier and cleaner to use SQL views to provide the data for the report and use Reporting Services only for the report. So go to SSMS and create your view.
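For example, a simple aggregating view could look like the following; the table and column names are hypothetical.

-- Hypothetical view that aggregates order data for the report
CREATE VIEW dbo.vw_SalesByMonth
AS
SELECT
	YEAR(OrderDate)   AS OrderYear,
	MONTH(OrderDate)  AS OrderMonth,
	SUM(TotalAmount)  AS TotalSales,
	COUNT(*)          AS OrderCount
FROM dbo.Orders
GROUP BY YEAR(OrderDate), MONTH(OrderDate);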

2. Create a new BIDS (Business Intelligence Development Studio) solution based on the “Report Server Project” Template. Add a new report by clicking on the Report node in the Solution Explorer. Next we need to connect to a data source. Give it a name, click edit and connect to your database.

(screenshot: data source dialog)

3. Click next and the Query Designer appears. Here you enter the T-SQL statement for your data. Ideally it’s just a “SELECT * FROM yourView” (or select each column explicitly). You can test it using ‘Run Query’. You can also choose to use data from a stored procedure if you like, just select the appropriate option.

(screenshot: query designer)

4. Design your report. You can add elements to the report surface by right-clicking on the center surface and selecting them from the insert menu. The most important are Header, Footer and Table. In the Header and Footer you can add Textboxes with static text, or by right-clicking on them and choosing Expression you can choose some variable input, for example page number, report name, etc. In the table you can drag columns from the “Report Data” pane (on the left, showing the data source you just added). When you are finished, you might have something like this. Click preview to get a glimpse at what your boss might get.

(screenshot: report designer layout)

(screenshot: report preview)

5. Now let’s deploy it to the report server. First we must set the report server URL. Right-click on the solution in the Solution Explorer and click on Properties. You will see the following dialog. Change the TargetReportFolder value to something you wish and provide the TargetServerURL as http://yourserver/reportserver (not /reports, which is Report Manager!). Press save. Now right-click on the solution again and select Deploy.

(screenshot: report project properties)

6. When it finishes, open the URL http://yourserver/reports (not /reportserver!). You will see something like this. Go ahead and click on Test.Reports (or whatever you specified as TargetReportFolder) and then click on your test report. You will be greeted with the web version of your report.

(screenshot: report rendered in the browser)

7. Now your boss will not want to go to the web page to get that report, so let’s look at sending it to him by email automatically. First we need to change the security settings for connecting to the data source. In the report page, click on Properties > Data Sources. Change the “Connect using” to “Credentials stored securely in the report server” and specify an account that has access to the data. Also check “Use as Windows credentials…”.

(screenshot: data source security settings)

8. Now you can go to Subscriptions. Click on email. Don’t have email as an option? See Additional Tip #2. Enter the recipient information as you wish and select a schedule for the sending. If you press save and encounter an error that ‘SQL Server Agent is not running’, either figure out what the problem could be yourself or read Additional Tip #3 :). Once you have finished, lean back and let SSRS make your boss happy and in turn make you happy.

So we have seen the absolute basics of report creation. A quick and dirty introduction. This should cover your first encounters with SSRS. Hope it helps someone!

Stop reading here unless you are Troubleshooting…

Additional Tip #1:

I try connecting with SSMS (Management Studio) and it won’t connect. Maybe you renamed the server? Open RSConfigTool, go to Database > Change Database > Choose an existing report server database. Just choose the existing one, and let it run through its config. That should repair everything and you should be able to connect using SSMS again.

Additional Tip #2:

You don’t have email in the options for delivery in the subscriptions dialog? Probably it’s not configured. Thankfully that is very easy. You start the RSConfigTool. Connect and go to the Email tab. There you just specify a sender address and the SMTP server you would like to use. Click apply and voila. The server will automatically restart and the email option now appears.

Additional Tip #3:

Saving the subscription results in an error message saying SQL Server Agent is not running. This is a DB or sysadmin issue, but just fire up SQL Server Configuration Manager (in the Start Menu under SQL Server 2008 > Configuration) then click on SQL Server Services, right-click on the SQL Server Agent > Properties. Go to the service tab, change Start Mode to Automatic and press OK. On the next reboot it will start automatically, but you still need to start it once for now. Right-click on the ‘SQL Server Agent’ and press Start.

 

Reference : http://alexduggleby.com/2008/07/18/sql-server-reporting-services-2008-tutorial-in-5-minutes-or-how-to-send-my-boss-a-report-from-a-view-quickly/

Using SQL Server 2012 T-SQL New Features

Introduction

SQL Server 2012, code-named “Denali”, is the next major release of Microsoft's database server. There are some new features added to T-SQL that make common tasks much easier. I will show how to use some of the new features in this article.

Sequence

Generating a sequence number, a.k.a. an auto number, is a common task in an enterprise application. For a single table, you can specify an identity field. But if you want a database-wide sequential number, then before SQL Server 2012 you had to devise something yourself. One solution to this problem is to create a table with a numeric field that stores the sequential number, and then use SQL to increase it every time one is used. In SQL Server 2012, we have a new solution: use a Sequence.

Create Sequence

To create a Sequence in SQL Server 2012 is very simple. You can create it with SQL Server Management Studio or T-SQL.

  1. Create Sequence with SQL Server Management Studio
    In the Object Explorer window of SQL Server Management Studio, there is a Sequences node under Database -> [Database Name] -> Programmability. You can right-click on it to bring up the context menu, and then choose New Sequence… to open the New Sequence window. In the New Sequence window, you can define the new Sequence: Sequence Name, Sequence schema, Data type, Precision, Start value, Increment by, etc. After entering all the required information, click OK to save it. The new Sequence will show up in the Sequences node.
  2. Create Sequence with T-SQL
    The following T-SQL script is used to create a new Sequence:

    CREATE SEQUENCE DemoSequence
    START WITH 1
    INCREMENT BY 1;
Use Sequence

The new NEXT VALUE FOR T-SQL keyword is used to get the next sequential number from a Sequence.

SELECT NEXT VALUE FOR DemoSequence

One thing I want to mention here is that a Sequence does not participate in transactions; if you run this script:

BEGIN TRAN
SELECT NEXT VALUE FOR dbo.DemoSequence
ROLLBACK TRAN

You can see that even though the transaction is rolled back at the end, NEXT VALUE FOR still returns the next sequential number the next time it is called. This behavior is consistent with an identity field.
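A sequence can also drive a column default, which is a common replacement for the old "counter table" approach. The table and column names below are hypothetical; only the DemoSequence name comes from the example above.

-- Hypothetical table whose key is generated from the sequence
CREATE TABLE dbo.DemoOrders
(
	OrderNo INT NOT NULL
		CONSTRAINT DF_DemoOrders_OrderNo DEFAULT (NEXT VALUE FOR dbo.DemoSequence),
	OrderDate DATETIME NOT NULL
);

INSERT INTO dbo.DemoOrders (OrderDate) VALUES (GETDATE());  -- OrderNo is filled from the sequence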

Page Data

A common situation is displaying a large amount of data in a DataGrid. Earlier, the programmer usually used the paging feature of the DataGrid to handle this: by choosing a different page number, a different set of data is displayed on the screen. However, there are multiple ways to retrieve that data from the database. A developer could:

  1. Retrieve all data from the database, and then let the DataGrid display only the current page of data.
  2. Retrieve the current page of data from the database by using a temp table.
  3. Retrieve the current page of data from the database by using the ROW_NUMBER() function.
    SQL Server 2012 provides a new way to retrieve the current page of data from the database:
SELECT *
FROM Customers
ORDER BY CustomerID
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY;

The OFFSET and FETCH NEXT keywords allow the developer to retrieve only a certain range of data from the database. If you compare this script with the ROW_NUMBER() approach shown below (the function was introduced in SQL Server 2005), you can see this script is shorter and more intuitive.

SELECT *
FROM (
SELECT ROW_NUMBER() OVER(ORDER BY CustomerID) AS sequencenumber, *
FROM Customers) AS TempTable
WHERE sequencenumber > 10 and sequencenumber <= 20

Exception Handling

SQL Server 2005 introduced the TRY CATCH block to handle exceptions in T-SQL. The TRY CATCH block is similar to the one in the C# language, except that you always need to raise a new exception after catching it. There is no way to simply re-throw it.

A sample of T-SQL script with exception handling in SQL Server 2005:

BEGIN TRY
	BEGIN TRANSACTION -- Start the transaction

	-- Delete the Customer
	DELETE FROM Customers
	WHERE CustomerID = 'CACTU'

	-- Commit the change
	COMMIT TRANSACTION
END TRY
BEGIN CATCH
	-- There is an error
	IF @@TRANCOUNT > 0
		ROLLBACK TRANSACTION

	-- Raise an error with the details of the exception
	DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
	SELECT @ErrMsg = ERROR_MESSAGE(),
		@ErrSeverity = ERROR_SEVERITY()

	RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH

In SQL Server 2012, by using Throw keyword, the above script will be changed to this:

BEGIN TRY
	BEGIN TRANSACTION -- Start the transaction

	-- Delete the Customer
	DELETE FROM Customers
	WHERE CustomerID = 'CACTU'

	-- Commit the change
	COMMIT TRANSACTION
END TRY
BEGIN CATCH
	-- There is an error
	ROLLBACK TRANSACTION

	-- Re throw the exception
	THROW
END CATCH

Also, you can use Throw to replace RAISERROR function:

THROW 51000, 'The record does not exist.', 1;

Enhanced EXECUTE keyword

The EXECUTE keyword is used to execute a command string. Previous versions of SQL Server only had the WITH RECOMPILE option to force a new plan to be compiled. SQL Server 2012 dramatically improves this part. The option clause now looks like this:

[ WITH <execute_option> [ ,…n ] ]

<execute_option>::=
{
	RECOMPILE
	| { RESULT SETS UNDEFINED }
	| { RESULT SETS NONE }
	| { RESULT SETS ( <result_sets_definition> [,…n] ) }
}

<result_sets_definition> ::=
{
	(
		{ column_name
		  data_type
		[ COLLATE collation_name ]
		[ NULL | NOT NULL ] }
		[,…n ]
	)
	| AS OBJECT
		[ db_name . [ schema_name ] . | schema_name . ]
		{table_name | view_name | table_valued_function_name }
	| AS TYPE [ schema_name.]table_type_name
	| AS FOR XML
}

The way to use the newly added options is like this:

EXEC CustOrderDetail '2'
WITH RESULT SETS
(
	(
	ProductName1 varchar(100),
	Unitprice1 varchar(100),
	Quantity1 varchar(100),
	Discount1 varchar(100),
	ExtendedPrice1 varchar(100)
	)
);

Get Metadata

An application sometimes needs more insight into a SQL script's result set. In the past, you needed to write a complicated script to query system tables or views, e.g. sys.objects, to get all the information. In SQL Server 2012, the new system stored procedure sp_describe_first_result_set makes the work trivial.

sp_describe_first_result_set @tsql = N'SELECT * FROM Customers'

Summary

There are more new T-SQL features in the upcoming SQL Server 2012. The majority of them are designed to improve development efficiency and reduce development effort.

 

Reference : http://www.codeproject.com/Articles/265760/Using-SQL-Server-2011-T-SQL-New-Features

Top 5 SQL Server 2012 Features

SQL Server 2012 quite handily marks the most excited I’ve been about a SQL Server release since waiting over five years for SQL Server 2005. I’ve written several blog posts about individual SQL Server 2012 features, but I have yet to look back and reflect on what features I’m most excited about. Here are my top five, in no particular order:

AlwaysOn

Quite simply, SQL Server 2012 is going to revolutionize the way we think about availability and recovery. Today, when we think about these concepts, we consider an individual database (log shipping, replication, mirroring) or an entire SQL Server instance (failover clustering). In SQL Server 2012, in addition to Failover Clustering enhancements, we’ll be able to treat a group of databases as an entity – a much more common requirement with today’s complex, multi-database applications. This entity is called an Availability Group.

Availability Groups will allow us to fail over that group of databases as a single unit. Since a single instance of SQL Server will be able to host multiple availability groups, in the event of a failure, we’ll have the flexibility to fail over one availability group to instance B, another availability group to instance C, and so on. This means that we don’t have to have a standby server that is capable of handling the full load of our primary server – we can distribute those workloads across multiple, lower-powered servers, with the same application transparency we enjoy with traditional mirroring today.

We will also be able to have more than a single replica for each Availability Group, so we can have one or more local copies (protecting us from localized failures) as well as one or more remote copies (protecting us from entire site failures). Finally, read-only secondaries will allow us to run certain backup operations and all reporting activity against the mirrored copy, taking significant load off the primary server (without the maintenance and overhead of snapshots). These are called Active Secondaries in the official literature; however, I have a gripe with this term – I think it is slightly misleading, since it can imply both read and write activity.
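As a rough, hedged sketch only (the group name, database names, server instances, and endpoint URLs below are made up, and many options are omitted), defining such a group with a readable secondary in T-SQL looks roughly like this:

-- Hypothetical sketch: one synchronous replica with automatic failover, one asynchronous read-only replica
CREATE AVAILABILITY GROUP [SalesAG]
FOR DATABASE [SalesDB], [CrmDB]
REPLICA ON
	N'SQLNODE1' WITH (
		ENDPOINT_URL = N'TCP://SQLNODE1.contoso.com:5022',
		AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
		FAILOVER_MODE = AUTOMATIC),
	N'SQLNODE2' WITH (
		ENDPOINT_URL = N'TCP://SQLNODE2.contoso.com:5022',
		AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
		FAILOVER_MODE = MANUAL,
		SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));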

And for the first time, we will be able to perform a cluster or mirror failover across subnets in a supported manner, meaning there will be an easy way to set up resiliency between, say, our data centers in Virginia and North Carolina. In addition, a new flexible failover policy allows us to dictate exactly what causes a failover, giving us more control over the tolerance of an instance or availability group to unfavorable conditions within certain server components.

These new AlwaysOn features allow us much more power and flexibility in our efforts toward both high availability and disaster recovery, whether we are using Availability Groups or standard Failover Clustering. Please note that AlwaysOn is not a technology in and of itself – it is merely the branding for the high availability and disaster recovery features in SQL Server.

Contained Databases

A common DBA task is to migrate a database to a different instance – to a newer, more powerful server; to a different phase in the application lifecycle (dev -> test -> QA -> staging -> production); or as part of various failover conditions. A problem that has plagued us for a long time in this process is that a database is never really all that independent – there is a lot of scaffolding that has to come along for the ride to ensure that the database continues to function as a dependable component in the application. This scaffolding includes logins, SQL Server Agent jobs, linked servers, custom messages stored in sys.messages, and even differences between server and database collations (which in turn can cause problems when working with temporary objects).

In SQL Server 2012, we have a new feature called Contained Databases, which intends to reduce or eliminate the dependencies that a database has on the SQL Server instance, making it easier to migrate a database to a new instance with less of the work involved in reproducing and validating these dependencies.

One of the issues solved in this release involves the ability to define a user with a password at the database level (a “contained database user”). By authenticating at the database level, you can move the database to a new server, and applications can change their connection string without having to create new SQL Server logins – a common problem we see today is orphaned users and/or mismatched security identifiers (SIDs).
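A minimal sketch of that scenario follows; the database name, user name, and password are hypothetical.

-- Allow contained database authentication on the instance (requires appropriate permissions)
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Mark a (hypothetical) database as partially contained
ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;

-- Create a database-level user that authenticates with a password; no server login is needed
USE SalesDB;
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd#2012';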

This release also solves an issue involving databases with a different collation than the server’s – today, if you use any objects in tempdb, you may find that collation conflicts are a significant part of your troubleshooting efforts, especially if you move your database between instances. In SQL Server 2012, tempdb will automatically create objects using the collation of the Contained Database, rather than the server default, making it easier to rely on your code regardless of the server collation (of course this still does not resolve issues where you try to join #temp tables generated from multiple databases with different collations).

You could even say that the new THROW Transact-SQL command can help make these transitions easier, as we will be able to raise custom errors without needing to define these messages first in sys.messages. But, to be clear, this is not an explicit feature of Contained Databases.

There are DMVs and events that will help identify queries and objects that are not “contained” and that will present a potential risk should the database be moved to a new instance. The only containment option in SQL Server 2012, however, is PARTIAL – because containment is only observed, not enforced.  In future releases, we will see this model extended to help deal with SQL Server Agent jobs and linked servers, as well as actual enforcement.
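For instance, the sys.dm_db_uncontained_entities DMV, run in the database in question, lists objects and code that reach outside the database and would therefore be at risk after a move:

-- Objects, modules, and statements that break containment for the current database
SELECT * FROM sys.dm_db_uncontained_entities;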

While it’s not complete just yet, Contained Databases give us a fantastic first step towards database autonomy. In my previous job as a production DBA and architect, this functionality could have saved me dozens and dozens of hours in deployment preparation and subsequent troubleshooting.

ColumnStore Indexes

In SQL Server 2012, the relational engine gets to take advantage of the column-based index first seen through the VertiPaq acquisition a few years ago. What a ColumnStore index does is essentially turn a traditional index on its side. Think about a covering index that includes multiple columns, but instead of storing all the different columns in a single row on a single page, split it up and store each column on its own set of pages.

While this is a vast over-simplification of what is happening behind the scenes, the performance benefits can be astounding. With certain types of queries, we experience drastically reduced I/O (even though it sounds like the index will span more pages, not less). Since each column is stored in its own set of pages, this makes compression much more effective – since you are much more likely to have like values on the same page. Think about ordered integers and how effective page compression will be if every value on the first page is “1.” Also think about how effective an aggregate operation will be against a column that is stored on far fewer pages, without having to read the pages that hold columns the query does not reference.

You might ask, how does this really differ from a bunch of indexes on individual columns? There isn’t that much difference structurally, except that a ColumnStore index is typically defined on most of the columns in a table. There is a specific star join optimization that has been implemented which makes a query against multiple columns (including aggregates against some or all) much more efficient than the single index that would have been used in conjunction with a bunch of lookups in a traditional OLTP query. This optimization does not work for all queries, and in fact you will see that the ColumnStore index can hinder certain types of queries, such as outer joins and unions. But for certain workloads I have seen reports of performance improvements in excess of 1000%.

Now, nothing is free. In this version, ColumnStore indexes have a major limitation: they are read only. This means that once a ColumnStore index has been created on a table, you can no longer perform any DML operations against that table. So while it seems this makes the very feature far less useful in an OLTP scenario, this is not the case. With native or custom partitioning, for example, you could use a sliding window scenario, having a static set of partitions in a table with a ColumnStore index, and load current data (say, for today only) into a separate table with no ColumnStore index. Once the current day rolls into the next, you can continue loading the new day’s data into yet another table, create the ColumnStore index on yesterday’s (now unchanging) daily table, and then switch it into the partitioned table. You could make this relatively seamless to end users by creating a view that unions the partitioned table with the “current” table. You can also consider loading daily data into a non-ColumnStore table and, at the end of each day, drop the ColumnStore index from the reporting table, insert all of yesterday’s data, and rebuild the ColumnStore index. Whether this is worthwhile or fits your usage scenario depends on various factors, but the performance benefits that are realized on certain queries and workloads certainly make this something worth investigating.
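For reference, creating the index itself is a single statement; the fact table and column names below are hypothetical.

-- A nonclustered columnstore index typically covers most columns of a fact table
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, Quantity, SalesAmount);
-- In SQL Server 2012 the table is read-only for DML while this index exists;
-- drop or disable the index (or use partition switching) before loading data.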

For more information on ColumnStore indexes, see the ColumnStore Performance Tuning Wiki and Eric Hanson’s whitepaper entitled Columnstore Indexes for Fast Data Warehouse Query Processing in SQL Server 11.0.

T-SQL Enhancements

SQL Server 2012 brings many new features to the T-SQL language. Some were added merely to ease the transition to SQL Server from other languages and platforms. Others were added to provide very powerful, new ways to solve cumbersome problems. A few examples:

  • TRY_CONVERT()

    I’ve been bitten many times by bad data in a column using the wrong data type. ISNUMERIC(), for example, is not always reliable; values that return true are not convertible to all numeric types. Today I might try to perform something like this:

    SELECT CONVERT(INT, column) … WHERE ISNUMERIC(column) = 1;

    However, this will fail for values like ‘e’ which are considered numeric but cannot be converted to an integer. TRY_CONVERT() allows you to ignore invalid conversions and return NULL for those values instead of returning an error for the entire query (a short sketch follows this list).

  • OFFSET / FETCH

    Many web applications use paging to show 10 or 50 rows per page and allow the user to scroll through each page of results rather than download the entire set. MySQL has had the non-standard LIMIT clause for some time now, and SQL Server users have longed for similarly simple syntax. ORDER BY … OFFSET / FETCH syntax, which is standards-compliant, has been added to SQL Server 2012. While it doesn’t provide significant performance improvements over the tedious CTE solutions we use today, it certainly makes the code easier to write, as I demonstrated in a blog post last November.

  • FORMAT()

    Using CLR under the covers, we will now finally have relative parity with the .format() function we are used to using in .NET languages like C#. This means no more memorizing cryptic style codes like 101 and 103 for converting datetime values to localized presentation formats, or using all kinds of messy string manipulation to present numeric values with dollar signs and thousands separators. In August I wrote a lengthy blog post about FORMAT() with many examples.

  • Window Function Enhancements

    If you’ve heard of Itzik Ben-Gan, you’re almost certainly aware of what a big fan of window functions he is. I can tell you from first-hand experience that he is absolutely ecstatic about SQL Server 2012’s addition of window offset and distribution functions, as well as enhanced windowed aggregates (including window framing). You can see an intro to these features in his recent article on sqlmag.com, and watch for future articles, as well as an upcoming book devoted entirely to the topic. A small window-framing sketch also follows this list.
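To make the first and last of these bullets concrete, here is a small, hedged sketch; the dbo.Orders table and its columns are hypothetical and are used only to illustrate the syntax.

-- TRY_CONVERT returns NULL instead of raising an error when the value cannot be converted
SELECT TRY_CONVERT(INT, 'e');    -- NULL
SELECT TRY_CONVERT(INT, '123');  -- 123

-- Window framing: a per-customer running total (hypothetical table and columns)
SELECT CustomerID,
       OrderDate,
       SalesAmount,
       SUM(SalesAmount) OVER (PARTITION BY CustomerID
                              ORDER BY OrderDate
                              ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM dbo.Orders;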

These are just a few of the T-SQL enhancements in SQL Server 2012; for a more complete list, see the Programmability Enhancements page in Books Online.

ShowPlan Enhancements

This isn’t an advertised feature, per se, but it’s something I’m excited about nonetheless. From the ShowPlan XML we can now derive much more information about our query plans, making it easier to track down various nagging performance issues.

Of course, you’ll see new operators such as Window Spools (for the new windowing functions), Offsets/Sequences and ColumnStore Index Scans (including whether execution mode was row or batch). There is also information about hint changes first introduced in SQL Server 2008 R2 SP1 (FORCESCAN, and FORCESEEK with columns).

There are also new warnings at the operator level, such as spills to tempdb (from sort operations or aggregates), and more details about implicit conversion warnings (for example, whether a conversion affects cardinality or seek choice). At the statement level, we can see information about things like whether the plan was retrieved from cache. And at the plan level, we can see many new properties, including the reason a plan was not parallel (for example, MAXDOP), detailed memory grant information (including, in some cases, how much memory was requested and how long the grant took), and deeper information about parallel queries (for example, thread usage across NUMA nodes).

To get an idea of all the information that is coming to a ShowPlan near you in SQL Server 2012, and even to get some insight into future features that have yet to be announced, perform a file comparison between the latest XSD and the 2008/2008 R2 version. You can find the SQL Server 2012 XSD at the following location:

 

    C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\schemas\sqlserver\2004\07\showplan\showplanxml.xsd

(For SQL Server 2008/R2, change 110 to 100.)

Conclusion

There are at least a dozen other features in SQL Server 2012 that I highly anticipate simplifying the lives of developers, database administrators and other stakeholders alike. Currently, I feel a little bit of sympathy for anyone I come across still running SQL Server 2000. Sometime next year, I’m going to feel a LOT of sympathy for anyone NOT running SQL Server 2012. If you weren’t already excited about this release, I hope my perspective on a few of its features has helped.

Reference : http://blogs.technet.com/b/dataplatforminsider/archive/2011/11/01/my-top-5-sql-server-2012-features-by-aaron-bertrand-guest-blogger.aspx