Thursday, December 27, 2012

Presenting SSAS Tabular to MSBISF Group on January 9, 2013


On Wednesday January 9, 2013 I will be presenting SSAS Tabular and Self Service BI - The DynamDAX Duo to the MSBISF user group at 6 PM EST at Nova Southeastern University.



You can find out more information on the event here.

I am very excited for this presentation and hope you can attend.

This training will be very hands-on and will walk through how to create a SQL Server Analysis Services Tabular model from the ground up.

I will then show you how it shines by exposing it in Excel, and will even demo Power View on top of the model.

Tuesday, December 4, 2012

Presenting as Part of Pragmatic Works Free Training on the T's

On Thursday December 20, 2012 at 11 AM EST I will be presenting SSAS Tabular and Self Service BI - The DynamDAX Duo.

This webinar is being put on by the awesome folks at Pragmatic Works



and is part of their series of free training on the T's (Tuesdays and Thursdays). You can find out more information on the event here and register today!

I am very excited and honored to be doing this webinar and hope you can attend.  This training will be very hands-on and will walk through how to create an Analysis Services Tabular model from the ground up. I will then show you how it shines by exposing it in Excel, and will even demo Power View on top of the model.

Thursday, November 29, 2012

Fact Table Range Groups Created Easily in SSAS Tabular

This is a very cool feature that I wanted to share in my SQL Saturday #168 session on SSAS Tabular but ran out of time to cover. Often in a data warehouse project you will receive a request to group certain values in a fact table into range groups.

For example: grouping Sales Amounts into categories of "Small", "Medium", or "Large", with corresponding price ranges of $0-$50, $51-$100, and $101-$1,000,000.

If you have ever built a fact table in a data warehouse from the ground up and those ranges are set in stone, it is business as usual: the ETL accounts for a dimension called Sales_Category, a foreign key to the dimension is created in the fact table, and so on.

With Analysis Services Tabular you can import a range table right into the SSAS model and create a DAX formula to accomplish this simply. You are probably asking yourself, "Don't I still need a table with the range values created in the data warehouse to import, so this is still going to take some time to accomplish?"  The answer is no, you don't (see the disclaimer below).  You can create a range table in Excel or a flat file and import it right into the Analysis Services Tabular model.  Alberto Ferrari's post on banding with PowerPivot helped me do this, and it works elegantly.  If you don't read Alberto's blog posts regularly, you should ;-)

Disclaimer: I am not saying whether or not this is a best practice, but it can be accomplished quickly to accommodate a need and it works elegantly. Personally I would do this to meet a quick need and then incorporate the range table into the data warehouse for consistency. In my finest Forrest Gump voice, "that is all I have to say bout that," and you can decide what is best for your situation.

Here is how you can do it...

Step 1: Click on Model>Import From Data Source>Excel


Step 2: Choose the Excel file you created for Sales Categories (mine had Sales_Category, Min_Sales_Amount, and Max_Sales_Amount columns, matching the DAX formula below) and enter credentials when prompted. Enter a name for the "Friendly Name" of your table.  I chose RangeSalesCategory. Click Finish.



Step 3: Finish the import into the model.


Step 4: Click on the fact table you would like the range incorporated on and create a new calculated column with a simple but very powerful DAX function called CALCULATE. For each fact row, the formula below returns the single Sales_Category whose Min/Max range contains that row's SalesAmount:


= CALCULATE (
    VALUES ( RangeSalesCategory[Sales_Category] ),
    FILTER (
        RangeSalesCategory,
        FactInternetSales[SalesAmount] >= RangeSalesCategory[Min_Sales_Amount]
            && FactInternetSales[SalesAmount] < RangeSalesCategory[Max_Sales_Amount]
    )
)





As you can see, the Sales Category range is populated with very minimal effort.  All that is left to do is deploy and process the cube. Make sure you leave this column unhidden along with the primary key. Doing this creates a degenerate dimension you can slice your measures by without a week or so of effort!
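If you want something to slice right away, a basic measure will do. Here is a minimal sketch (the measure name is mine and not something created earlier in the post) that assumes the same FactInternetSales[SalesAmount] column used in the calculated column above:

-- Sketch: a measure (name is mine) defined in the measure grid of the FactInternetSales table
Total Sales Amount := SUM ( FactInternetSales[SalesAmount] )

Drop Total Sales Amount into the values area in Excel or Power View, put the new Sales Category column on rows, and you will see the banding at work.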


Wednesday, November 14, 2012

Speaking This Saturday in Tampa at SQL Saturday 168 BI Edition!

Just a quick blog post to let everyone know that I am speaking this Saturday, November 17, 2012, at SQL Saturday - Tampa BI Edition 2012.  You can find more info on session times and the location of the event here.

I will be doing a session on SSAS Tabular and Self Service BI. I will walk everyone through how to create an SSAS Tabular model from scratch, get into some hierarchies and KPIs, and then show the performance of the xVelocity engine on a large data set with calculations.



If you are in the Tampa / St. Pete area, you should take advantage of this free training event with great sessions from national and local experts to learn the latest with the Microsoft Business Intelligence Stack. Hope to see you all there!

Monday, November 5, 2012

SQL Saturday 174 - Follow Up Oracle Transactional Replication Answers

Thank you to everyone that attended my session on Conquering Reporting by Scaling Out SQL Server. It was a packed room, and thank you especially for the high marks on the evaluations. It was my first time presenting in the first session slot, and I was at the maximum slope of the caffeine curve during the session ;-)

There were two questions related to Oracle and Transactional Replication.

Questions:

1. Are the tables locked in Oracle for Publishing as they are in SQL Server when using an Oracle Publisher?

2. Can you have an Oracle database as the subscriber?

Great questions. I tried to set up an Oracle publisher once many years ago, and the Oracle DBA team did not want to do it because it required a bunch of extra objects; they wanted to use a third-party tool called Data Mirror instead. That doesn't answer the questions, though, so here is what I found out.

It is important to note that Oracle Publishers are being deprecated from SQL Server in a future release. You can create an Oracle Publisher from SQL 2005 through SQL 2012 using Oracle 9i and above. From what I've read, Microsoft wants folks to use SQL Server Integration Services (SSIS) and Change Data Capture (CDC) as the methods going forward.

Answers:

1. No, they are not locked.  DDL changes can be made to Oracle Publisher tables at will, but they will not show up at the subscriber unless the publication is reinitialized. Further, Oracle Publishers are set up differently: triggers are created on the published Oracle tables, along with change-tracking tables that log the changes, which are then pushed to the distributor.

References:
http://msdn.microsoft.com/en-us/library/ms152499.aspx  (Info on Oracle & Transaction Replication)


2. Yes, you can have an Oracle database as a subscriber. You cannot have pull subscriptions; you can only have push subscriptions.

References:


Great questions, and sorry for the delay in posting the answers.

Thursday, October 11, 2012

SharePoint 2013 Installation and Configuration for Power View


I have been working on this post for a long time.  I actually referred to it as the "long one"...then as time went on it grew into being referred to as "Long Duk Dong" from the '80s movie Sixteen Candles, for no particular reason other than I liked saying it.

How long? A few months, actually, scattered here and there between projects, and when I ran into an error or snag (and there were a lot of them) I usually shelved it for something else.  This post will hopefully make it easy (but long) for you to build a SharePoint 2013 server and configure Power View on it.  There are a lot of posts out there on SharePoint 2013 configuration for BI, but I needed several of them put together to get this accomplished, so I have outlined everything below in one nice blog post so you don't have to.

I downloaded a trial of SharePoint 2013 because I like to stay on the bleeding edge and work with all the latest technologies. SharePoint is now an integral part of the Microsoft Business Intelligence Stack and the technology I know the least about, so why not use the latest and greatest, right?  I know how to use SharePoint and place documents out on a SharePoint site, etc., but configuring it on a server as a farm install for Business Intelligence is a whole different ball game.  This post is for those who are novices at installing and configuring SharePoint but want to have Power View installed centrally on a server.  I used Windows Server 2012 RC with the GUI as the operating system, via VMware Workstation 8.0, for my SharePoint 2013 server.

Prerequisite steps that I performed were:
  1. Created a Virtual Machine (VM) that was a Domain Controller. It's simpler than you think.
  2. Created a VM for the SharePoint 2013 server.
  3. Joined the SharePoint 2013 VM to the domain.
  4. Understand how to deploy a tabular model to an Analysis Services 2012 server, because you will need an Analysis Services Tabular database (or a PowerPivot workbook) for Power View to use as a source.  My blog post here can be used to help with SSAS Tabular.
I should note that you don't have to use VMware or Windows Server 2012.  Any machine, virtual or physical, will do, and Windows Server 2008 R2 is fine, as long as you have a domain controller set up and a server for SharePoint 2013.

Here is the list of steps, including things I downloaded and performed, as well as articles I used for assistance.  You know you just said to yourself, "23 steps, that is long" (come on, say it - Long Duk Dong).
  1. Download SharePoint 2013.
  2. Download SQL Server 2012 SP1 CTP 3 (CTP4 is out now, but CTP3 is what I used and know works) ENU\SQLServer2012SP1-CTP3-ENU.iso
  3. Install the SQL Server 2012 SP1 CTP3 DB Engine, Analysis Services (pick Tabular mode), Management Tools, plus SQL Server Data Tools, as described in Installing the BI Features of SharePoint 2013 Preview. (I did not install anything from SQL Server 2008 that is mentioned in this article.)
  4. Download the Adventure Works DW data file and AdventureWorks Tabular Model SQL Server 2012 here
  5. Open the Tabular model with SQL Server Data Tools you installed in step 3, and Publish it to the Analysis Services server.
  6. Install SharePoint 2013 Prerequisites as described in Installing the BI Features of SharePoint 2013 Preview.  After the restart I had to initialize the Prerequisites again. Once finished, you need to restart the machine again.
  7. Install SharePoint 2013 as described in Installing the BI Features of SharePoint 2013 Preview.  I followed the screenshots exactly.
  8. Install SQL Server 2012 SP1 CTP3 as described in Installing the BI Features of SharePoint 2013 Preview.  I followed the screenshots exactly for adding the Reporting Services services to the default instance from step 3 above.
  9. Install SQL Server 2012 SP1 CTP3 as described in Installing the BI Features of SharePoint 2013 Preview.  I followed the screenshots exactly for adding SQL Server PowerPivot for SharePoint.
  10. Install the PowerPivot for SharePoint 2013 configuration as described in Installing the BI Features of SharePoint 2013 Preview.  I followed the screenshots exactly.
  11. Add the service applications and the many configurations via the SharePoint 2013 interface as described in Installing the BI Features of SharePoint 2013 Preview.  I followed the screenshots exactly.  Make sure under "Provision Subscriptions and Alerts" you download the script and execute it on the SQL Server.
  12. After all the steps have been completed in Installing the BI Features of SharePoint 2013 Preview, go to http://[Servername]:2013 in Internet Explorer. This is Central Administration.  If you have followed along to this point, it has all been pretty straightforward and you're all set up with everything installed for Power View (and Performance Point) to work.  For SharePoint folks the next few steps may seem elementary, but like I stated earlier in this post, I am a SharePoint newbie, so here are the final steps I performed in detail.
  13. Under Application Management, click on Create site collections, give your site a name, and be sure to select the PowerPivot Site template.  I used [Domain]\Administrator accounts for the Primary and Secondary Site Collection Administrators.  I use [Domain]\Administrator accounts everywhere for the purposes of setting up a lab\demo\play environment.  This is NOT a best practice, and there are plenty of articles out there describing which groups and accounts to create for a SharePoint setup. I recommend you follow them for production and similar environments.
  14. Navigate to the site you just created.  Mine is http://[Servername]/sites/Powerpivot.  Click on Documents in the left navigation>then Library in the top menu>Library Settings>Advanced Settings>click "Yes" to allow management of content types>scroll down and hit "OK".
  15. You should still be in the library settings location>click "Add from existing site content types"
  16. Add BI Semantic Model Connection (I added Report Builder Model and Report as well, but you don't have to)>hit OK.
  17. You should be back at http://[Servername]/sites/Powerpivot now.
  18. In the left nav click Documents>in the top nav click Files>click "New Document">BI Semantic Model Connection.  YOU'RE ALMOST THERE - STICK WITH IT! :-)
  19. Create a file name for the connection, specify the [Servername], and the Database>Click OK.
  20. Under Documents you should see your new connection.  Click the three dots>you should see "EDIT SHARE FOLLOW">click the next three dots>click Create Power View Report!
  21. If the next screen  prompts you to install Silverlight go ahead and install it.
  22. If it asks you to restart your browser like mine did, feel free to curse at it; it may make you feel better (it worked for me). Then give in and restart it.
  23. Navigate to http://[Servername]/sites/Powerpivot>click the three dots>you should see "EDIT SHARE FOLLOW">click the next three dots>click Create Power View Report. You are done, and I hope this helped, because when I journeyed down this road it wasn't easy! If you want help with creating Power View reports, check this out. Dragging and clicking items from the field list on the right will get you familiar with it quickly. I created the report below in under 5 minutes.

All the best, and I hope this helps out those folks who have struggled with this.

Monday, September 24, 2012

Selected to Present at SQL Saturday #174 Charlotte

Just a short post to announce that I have been selected to present Conquering Reporting by Scaling Out SQL Server on October 27, 2012 at SQL Saturday #174 in Charlotte, North Carolina.  I am very honored to have been selected, and excited, as this will be my first time presenting outside of Florida.


This will also be my first time officially visiting Charlotte and not just connecting through it to New York, Boston, or Philadelphia via US Airways.  Hopefully there will be some nice fall weather as well!

Wednesday, August 8, 2012

SSAS Tabular & DAX intro for T-SQL Folks

This post is intended to introduce a simple SSAS Tabular project and Data Analysis Expressions (DAX) to those who know Transact SQL (T-SQL) well.

This post assumes you have installed SQL Server 2012 DB Engine and SSAS Tabular on your machine.  Please use a sandbox development machine to do this.

I created two simple OLTP tables that you can use.  Usually a separate Product table, and even a Date table with hierarchies, would be used, but for this illustration we are going to keep it simple.

After you download the script and run it on a test database on your sandbox development server, you can create an SSAS Tabular model.






1. Click Start>All Programs>SQL Server 2012>SQL Server Data Tools (SSDT).

2. In SSDT, click File>New>Project and the following window should appear:


3. Make sure to select Analysis Services Tabular Project under Business Intelligence and select a location and name that work for you.  I picked the default location and used the name DAXForTSQLFolks for the project.

4. The project will create an empty model.  Navigate to the menu and click Model>Import From Data Source>Microsoft SQL Server.

5. Enter the credentials for the SQL Server and database in which you created the tables above, and a friendly name for the server.  For friendly names I use the name of the server with a prefix of DEV -, QA -, UAT -, or PROD - in case I ever forget what I'm working with.

6. Enter the username and password to connect to the SQL Server.

7. Select the two tables Sales and Salesperson to import and click Finish. You should see the tabular model with data.  Looks similar to PowerPivot, doesn't it?



8. Let's create one measure in the SSAS Tabular model.

9. Click on the sum button (for measures) with the SaleID column still highlighted as above and select Count.  You will notice it places a measure in the grid below the SaleID column as "Count of SaleID:=COUNTA([SaleID])".

10. You've just created a measure that is basically the number of sales.
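For reference, the generated measure is shown below, along with a sketch of an explicitly named alternative (the name is mine) that counts rows instead of non-blank SaleID values:

-- The measure the designer generates in the Sales table's measure grid:
Count of SaleID := COUNTA ( [SaleID] )

-- A hand-written alternative (name is mine) that counts the rows of the Sales table:
Number of Sales := COUNTROWS ( Sales )

Either one gives you the number of sales to slice by the other columns.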

11. You are now ready to deploy the SSAS Tabular database to your sandbox development server.  In Solution Explorer (usually on the right side of SSDT) find the project and right-click it to display its properties.

12. Click on Deployment and make sure the server is set correctly to deploy the tabular model.  I've changed the Cube Name from "Model" to "LearnDAXCube".  All other defaults are fine for this exercise.

13. Click Build and select Deploy "DAXForTSQLFolks", or whatever you named the project.

14. Open SQL Server Management Studio (SSMS) and connect to Analysis Services on your server.

15. You can browse the cube by right-clicking "LearnDAXCube" in the Object Explorer (usually on the left side of SSMS) and dragging and dropping "Count of SaleID" and the associated dimensions to get familiar with slicing and dicing the data.

You are now ready to write some DAX.

Open SSMS and a new MDX query window.  Yes, this is confusing: there is no DAX query window in SSMS yet. I can read your mind, and yes, I think this is a bad design as well, but I'm sure it is probably coming.

Download the DAX script from below (it's saved as a .sql file so Box will format it), paste it into the new MDX window, and you can cross-reference it with the notes below on how to perform common SQL operations in DAX.







--1 How do I perform a "select" in DAX? The equivalent is EVALUATE.

--2 How do I perform a "where" in DAX? The equivalent is FILTER.

--3 How do I perform a "join" in DAX? There is not one single way here but a few: SUMMARIZE, CROSSJOIN, CALCULATE, etc. In the example provided here I combine sets by using SUMMARIZE.

--3/4 How do I perform a "group by" in DAX?  The equivalent is SUMMARIZE.

--4 How do I perform a "count" in DAX?  The equivalent is COUNT, COUNTX, COUNTA, or DISTINCTCOUNT.

--4 How do I perform a calculation in DAX?  The equivalent is just like SQL, except with DAX functions. (A small sketch of these patterns follows below.)
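To make the mapping concrete before you open the script, here is a minimal sketch of DAX queries against the Sales and Salesperson tables created earlier. This is not the downloaded script itself, and the SalesAmount and SalespersonName columns are my assumptions, so adjust the names to match your tables.

-- Sketch only: SalesAmount and SalespersonName are assumed column names.

-- 1. "select": EVALUATE returns a table
EVALUATE Sales

-- 2. "where": FILTER restricts the rows
EVALUATE
FILTER ( Sales, Sales[SalesAmount] > 100 )

-- 3/4. "join" + "group by" + "count": SUMMARIZE groups Sales by a column
--      from the related Salesperson table and adds a count per group
EVALUATE
SUMMARIZE (
    Sales,
    Salesperson[SalespersonName],
    "Number of Sales", COUNTROWS ( Sales )
)
ORDER BY [Number of Sales] DESC

Paste any of these into the MDX query window while connected to the deployed LearnDAXCube database and they should run, provided the column names line up.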

As you can see from (4), most salespeople have success selling Miami Hurricanes hats; that's probably because the Hurricanes are five-time National Champions and have the most active players in the NFL.  Go Canes! :-)

Friday, July 27, 2012

Solid State Drive (SSD) Migration


Solid State Drives (SSDs) are all the buzz lately, and for good reason: they are very, very fast.  Ask anyone who has had to performance tune SQL Server (especially for a Data Warehouse), and I/O is usually the largest area for improvement.  Not always, but usually, due to the fact that hardware today is very powerful and not very expensive.  I have used SSDs on SQL Server to house TempDB for an Operational Data Store (ODS) before, and while it was not cheap, it was the first time I had seen hardware thrown at a SQL Server problem make such a drastic difference right out of the gate.  Well, there was that one time when a client had SQL Server installed on a web server with 1 processor, 1 GB of RAM, and a pair of slow-RPM local drives...I guess that doesn't count.


This post isn't about a SQL Server migration to SSDs and the good results, sorry. That is a good idea, and it is in the queue.  This is about how I decided to follow the advice that I give to my clients as a consultant and bought an SSD for better performance for my laptop, and the steps I took to do it in a few hours.

I use VMware Workstation on my laptop to run a handful of virtual machines. The more I loaded up the VMs with the SQL Server DB Engine, SharePoint, Power View, and SSAS, the slower it became.  I bucked up and ordered a 512 GB SSD, and wow, what a difference.  If you are thinking of purchasing an SSD for your laptop or tower, don't think about it, just do it.  My laptop boots almost instantaneously, and the VM performance is probably 10-20x better.

The steps below document how I swapped the SATA drive for an SSD on my Dell XPS 15z and used a free imaging tool called Macrium Reflect to clone the existing drive.  I thoroughly enjoyed doing this, and my wife just smiled at me because she knows I love taking things apart.  My father would be proud to see how organized this is compared to his Sony rack stereo, which had a box of "spare" parts left over when I was 12.

Here are the steps, and I hope they will help you, because I spent some time learning them...then learning them some more.

1. Buy or borrow an external hard drive.  I used a Seagate GoFlex Ultra eSATA drive. eSATA is native to almost all machines, while USB requires drivers.  I didn't want to deal with drivers, so I used Luis Figueroa's eSATA drive.  He has the best toys, and every piece of software on MSDN from the last 5 years.

2. Install Macrium Reflect Free Edition.  You can purchase an advanced version but the free edition works just fine. http://www.macrium.com/reflectfree.aspx

3. Image your existing Hard Drive using Macrium and store the .img file on the external drive.

4. Create a "rescue" CD from Macrium.  You will need a CD-R and a burner.  You can do this from USB as well, but there could be driver issues, so I took the safe bet.  The Macrium rescue CD automatically includes a bootable Windows PE or Linux rescue environment configured with the Macrium software to restore your .img file.

5. Go into your BIOS (F2 for Dell) and change the boot order to use the CD.

6. Now the fun begins.  Unscrew and remove the cover of your laptop.


7. Find the battery and unplug it. (This step is very important; you could fry your laptop if it is not done.)


8. Locate the hard disk.


9. Unscrew the supporting bracket.  Keep it around as you are going to have to put the SSD in it.


10. Place the new SSD in the bracket and reverse all the steps until you get the cover back on.


11. Place the CD in the drive and boot from it.  The Macrium program comes up instantly in Windows PE; select restore image and browse to the .img file on the external drive.

It does take some time to restore.  I had a 274 GB .img file and it took 2 hours and 24 minutes.

Enjoy your lightning fast laptop or tower now!

Tuesday, July 17, 2012

SQL Saturday #151



I received an email over the weekend that I've been selected to speak at SQL Saturday #151 in Orlando on September 29, 2012.  I am honored and excited to have been selected again to speak. I am not going to present on Scaling Out SQL Server with Replication again, but am going to present on SQL Server 2012 Analysis Services Tabular, part of Microsoft's Business Intelligence Semantic Model.

Hope to see you all there!

You can find more info out on the event here: http://www.sqlsaturday.com/151/eventhome.aspx

Friday, July 13, 2012

ETL, Not Always In That Order


Almost everyone in the Data Warehouse business has extracted data from a source system, then applied processes to cleanse it, de-duplicate it, aggregate it, or perform many other operations on it, and then loaded it into its final resting place in a respective Operational Data Store (ODS) or Data Warehouse.  We as data professionals call that process ETL, which stands for Extraction, Transformation, and Loading.  Nothing earth-shattering about this, and it is commonly used.

At a Data Warehouse client of ours I have heard their employees use the terms ETL and ELT interchangeably. Every time I heard them say this I thought maybe they just got the acronym mixed up, which is understandable.  I don't know how many times I confused CTE (common table expression) and called it CET, but I know it was more than a handful.

So I asked an employee at the client why he used the acronym in that order, and to my surprise he said Extract, Load, then Transform and explained it. I had that look of a cartoon character whose mouth drops, hits the floor, and whose tongue rolls out.  Like this...although my overalls and bow tie are much cooler.



I decided to research this. To my surprise, both terms are correct and used in the industry, along with a third term: ETLT.  Now, I'd be remiss if I didn't state that I always speak about ETL as a layer of a BI solution or, in general, the process of transporting data.  I usually don't speak about the exact procedure of ETL, ELT, or ETLT, although I have done it many times.  I've just never spoken about it in this way.

Let's define each procedure and the differences:

ETL: Classically, the way Data Warehouse / Business Intelligence professionals categorize the process of extracting data from the source, combining, de-duping, and cleansing (transforming) it, and loading it into the target Data Warehouse or ODS. ETL can also be the exact order of the procedure used.

ELT: An alternative procedure that performs the transformations on the target server, i.e., extract the data from the source, load it into the ODS or Data Warehouse, then transform it into its final form.  This tends to be heavy on the destination relational database management system where the Data Warehouse or ODS lives, so that system needs enough horsepower to handle the work.

ETLT: The same description as ETL above, plus one last transformation on the target Data Warehouse or ODS. ETLT is slightly heavier on the destination RDBMS than ETL is, but not as heavy as ELT, because in ETLT it is usually only a few final transformations and not the bulk of them.

Friday, July 6, 2012

SQL Saturday #141 - South Florida

A year ago I attended my first SQL Saturday in South Florida, and I intentionally did not want to make a blog post until this year's SQL Saturday in South Florida. Okay, I'm lying. I've been extremely busy over the last year running multiple BI projects for TekPartners BI Solutions, and my wife and I welcomed our second child into the world.  I actually stated last year in my blog post about SQL Saturday #79 that I would present this year, and I held true to that by presenting "Dividing and Conquering Reporting by Scaling Out with Replication"; you can view the slides here.
I was also going to present on SSAS Tabular in SQL Server 2012, but that will have to wait until SQL Saturday #151 in Orlando.  I had about 25 people in the session, and the interaction and questions were great.  Replication is one of those SQL Server topics that few really know and understand, and most are simply scared of it.  Over the years I spent a lot of time implementing it with clients that really needed to scale out SQL Server for reporting, and it's a great tool to facilitate that.

SQL Saturday was a great event put on by the South Florida SQL Server User Group, and I am definitely going to attend and present at more of them in the future.

TekPartners BI Architects: Luis Figueroa, David Ortiz, and Me 

The sessions I attended were very educational.  Jen Stirrup's (Blog | Twitter | LinkedIn) session on Mobile BI was great and very informative, covering Azure, Performance Point, and many other great topics. Devin Knight's (Blog | Twitter | LinkedIn) session on Performance Point Dashboards was awesome: he created a dashboard from whiteboard idea to final product with Analysis Services as a source, and it was very informative.  Luis Figueroa's (Blog | Twitter | LinkedIn) session on super-fast dimension loading with Change Tracking and Change Data Capture was very informative as well, since not many people are familiar with the change tables and how to benefit from using them in SQL Server 2008+ sources.