Aug 13, 2014: Deep Dive on Self-Service Data Retrieval with Power Query

Matt Masson
Deep Dive on Self-Service Data Retrieval with Power Query

Excel has been able to import data for a long time. Power Pivot allowed you to import data from more sources into the far more powerful and efficient xVelocity engine. These, however, are fairly straightforward extraction and loading functions; any real data-shaping work has always been left to proper ETL (Extract, Transform, Load) products like SQL Server Integration Services (SSIS). Power Query adds the “T” part of ETL to Excel and the “Power” line of BI products. This session will demonstrate several of the new options available for data import with Power Query, and walk through a real-world data acquisition scenario to demonstrate the unique capabilities of Power Query. Once complete, the query will be published to Power BI, where some of the new discoverability features will be shown. After attending this session, you should have a good idea of what Power Query is, where and how it can be used, and what it can do.
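For readers who have not yet tried it, Power Query records each transformation as a step in the Power Query formula language (informally, “M”). As a rough sketch only, not taken from the session, and with a hypothetical file path and column names, a query that imports a CSV file, promotes headers, types the columns, filters the rows, and aggregates by date might look like this:

    let
        // Load a raw CSV file (hypothetical path and column names)
        Source = Csv.Document(File.Contents("C:\Data\Sales.csv"), [Delimiter = ",", Encoding = 65001]),
        // Use the first row as column headers
        Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
        // Assign data types to the columns of interest
        Typed = Table.TransformColumnTypes(Promoted, {{"OrderDate", type date}, {"Amount", type number}}),
        // Shape the data: keep only rows with a positive amount
        Filtered = Table.SelectRows(Typed, each [Amount] > 0),
        // Aggregate: total amount per order date
        Grouped = Table.Group(Filtered, {"OrderDate"}, {{"TotalAmount", each List.Sum([Amount]), type number}})
    in
        Grouped

The resulting table can be loaded to a worksheet or into the Power Pivot (xVelocity) model and, as the session shows, published and shared through Power BI.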

    Matt Masson
    Matt (Blog | LinkedIn) is a Senior Program Manager on the Power Query team. Matt has worked on multiple products across SQL Server, including SQL Server Integration Services (SSIS), Data Quality Services (DQS), Master Data Services (MDS), and the Data Management Gateway for the Power BI release. He has authored two books, SSIS Design Patterns (Apress) and SQL Server 2012 Integration Services (MS Press), and is a frequent presenter at SQL conferences.

Arnie Rowland
Basics: Improving Performance by Thinking Out of the Box

With SQL Server, when things seem to be running slower than expected, we often start examining execution plans, looking at performance metrics, and adjusting ‘the usual suspects’. Push this button, twist that knob, move that slider, and see what happens. It is often the case that increasing memory provides the most performance improvement for the least cost, and with the least disruption to the server. It is also often true that refactoring queries, or even databases, will provide the greatest sustainable gains, but at a large cost. Usually, changing code objects is relatively easy; it is regression testing that eats up the budgets for refactoring.

But after more memory has been added, additional CPU cores have been brought online, indexes and statistics have been refreshed, and all of the knobs and buttons have been tweaked, yet performance is again sliding down to worrisome levels, what then?

In this discussion, Arnie will guide an exploration of some of the many approaches that can be employed to remediate performance issues. Models and standards serve to give direction, but they can also block the path to viable solutions. The goal of this discussion is to cement a clear understanding that the first objective is always to keep the business flowing.

    Arnie Rowland
    Arnie [LinkedIn] is a Data Architect, Consultant, and Trainer specializing in developer/development issues related to SQL Server. Clients include Fortune 100 enterprises, large-scale NGOs, and domestic and foreign governments. In addition to facilitating the Oregon SQL Developers user group, he is a SQL Server MVP, a senior moderator for the Microsoft MSDN SQL Server Forums, a member of the Microsoft TechNet Wiki Community Council, and co-founder of Portland Code Camp.

Register Now! Don’t miss this meeting!

Refreshments graciously provided.

Featured Sponsor: Intertech

    Visit our website for more details.

We wish to acknowledge the OHSU Information Technology Group for supporting Oregon SQL by generously providing the meeting venue.

