Month: July 2014

My First Pre-Con: SQL Saturday 332–Minnesota

It is with tremendous joy (and a little trepidation) that I announce that I will be doing my very first Pre-Con as part of SQL Saturday 332 in Minnesota in October. I have been presenting for several years now and feel that it is time to take this next big step. Since I love presenting so much, the idea of presenting for a whole day is just awesome. There is also quite a bit more pressure in this scenario. But that is part of what makes this a great growth experience, stretching myself like I have never done before.

Over the past several months, I have done a lot of client work in Excel dealing with Power Pivot and Pivot Tables. I have also been doing a bit of work with Power View in SharePoint, the vast majority of which carries over to Excel 2013 as well. I have always been a fan of enabling users to do more with data and learn to be more self-sufficient. My experiences in Excel have reinforced the idea that Excel is a fantastic platform for the Self-Service BI movement. The past several months working with data in Excel have been some of the most fun of my career. So, when I needed a topic for a BI Pre-Con, the choice was easy.

You can find information on all the SQL Saturday 332 Minnesota Pre-Cons here. It is an impressive line-up, to be sure. The abstract for mine is below.

Microsoft Excel: The Business Intelligence Platform For The Masses

From gathering and shaping source data through data modeling and visualizations, it is staggering how much you can accomplish in Excel. This Pre-Con will walk you through creating an interesting and powerful BI solution in Microsoft Excel 2013. Whether you are a business user or a technical developer, you will get good value from attending.

1. Power Query 

• Use Power Query to gather source data from a variety of sources, both on-premises and in the cloud 

• Use various transformations on the Ribbon

• Travel back and forth through time via Query Steps

• The basics of Power Query Formula Language (M)
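
As a tiny taste of the "M" portion, here is what a simple query looks like under the covers (the table name "Sales" and the "Amount" column are just illustrative):

```
let
    Source = Excel.CurrentWorkbook(){[Name="Sales"]}[Content],
    FilteredRows = Table.SelectRows(Source, each [Amount] > 0),
    SortedRows = Table.Sort(FilteredRows, {{"Amount", Order.Descending}})
in
    SortedRows
```

Each step (Source, FilteredRows, SortedRows) shows up as a Query Step, which is exactly what lets you travel back and forth through time.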

2. Modeling Data With Power Pivot

• Importing Data from various sources

• Linking Tables to data residing directly in Excel sheets

• The importance of Date Tables

• Best Practices

3. DAX 101

• Introduce DAX syntax beginning with Calculated Columns including the mighty Related function

• The basics of the Calculated Fields (Measures)

• Row context/filter context

• The power of the CALCULATE function

• More…
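
To give a flavor of the DAX material, here are a few illustrative formulas (the table and column names are made up for the example):

```
// Calculated Column on a Sales table: pull the category across the relationship
= RELATED ( Product[Category] )

// Calculated Field (Measure): a simple aggregation
Total Sales := SUM ( Sales[Amount] )

// CALCULATE modifies filter context: the same measure shifted to the prior year
Prior Year Sales := CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
```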

4. Power Pivot/DAX Design Patterns

• Solving real-world problems with Power Pivot

• Many to Many relationships

• Parent-Child Hierarchies

• Segmentation

• More…

5. Excel Pivot Tables/Charts

• Connecting Excel to data sources like SSAS Cubes, Tabular Models, and the internal Power Pivot model

• Pivot Table basics

• Filtering methods and Slicers

• Conditional formatting

• Pivot Charts

• More…

6. Power View

• The basic visualizations (Bars, Columns, Matrix, etc.)

• Advanced visualizations (Multiples, Cards, Scatter/Bubble Charts, etc.)

• Filtering views or the entire report

• Design tips to take great advantage of Power View’s capabilities


I will be focusing on using Excel 2013 on my machine. There is so much to cover that trying to add in Power BI-specifics is just not in the cards. But I think that makes sense as SO many more people have Excel on their machines than are using Power BI right now. And SO few of those people are taking advantage of even a tiny subset of what Excel has to offer. A major goal of this Pre-Con is to help change that.

My PASS Summit 2014 Submission Feedback

Speakers have been asking PASS for feedback regarding their Summit submissions for a few years. This year, following a bit of a heated “discussion,” PASS announced that session feedback would be available upon request. I, like so many other speakers, applaud this decision. PASS did make it clear that both the quantity and the quality of the feedback vary widely.

I am a big proponent of learning from the experiences of others. As such, in the hopes that someone can learn something from the feedback I got, I hereby share what I received. I want to thank the reviewers who took the time to make these comments.

Analysis Services Terms and Concepts For The DBA (REGULAR SESSION – NOT SELECTED)


Despite some overlapping concepts, the worlds of the Relational engine and Analysis Services really are quite different. With more and more organizations realizing the power of Analytics, there is a good chance a BI initiative will come your way at some point.


This session is intended for the DBA that wants/needs to learn more about SQL Server Analysis Services. The goal is to provide a meaningful base of knowledge that will allow you to effectively participate in discussions of Analysis Services in your organization.


Through both slides and demos, you will learn:

— The differences between SSAS Multidimensional and SSAS Tabular

— Key terms like Measures, Dimensions, and Hierarchies

— Storage options such as MOLAP, HOLAP, ROLAP, and Direct Query

— Monitoring with Extended Events

— Overviews of MDX, DAX, and XMLA

— And more


Come take a few steps into the exciting world of Business Intelligence with SQL Server Analysis Services.


Seems this may be a 100 level session
Good topic, sounds more like a 100-level session to me.
No need of prerequisites to be SQL Server Administrator. should also focus on OLAP DW part and schema concept, slice and dice part of SSAS OLAP cube if someone wants to show the power of BI Analytics using SQL server analysis services.
Excellent and useful topic!


DANGER: The Art and Science of Presenting (REGULAR SESSION – NOT SELECTED)


Over the past decade, we have learned a lot about the chemistry of the brain and why humans react the way we do to events in our environment. The idea of Emotional Intelligence – EQ – is a compelling concept that applies this knowledge in a set of learn-able, improvable skills for leading others. Although EQ is often applied to corporate leadership, this session will explain the basics of EQ and demonstrate how you can use it to make your presentations better in the following areas:


• Crafting better slide decks

• Preparing yourself for presenting

• Delivering your content

• Dealing with the unexpected


Understanding and practicing the concepts of EQ can make your presentations a better experience for everyone in the room – including you.


This session was chosen as an Alternate last year and I ended up presenting it. It was quite successful (it narrowly missed being in the Top Ten sessions), so I submitted it again, noting to the committee why I was doing so. That should provide some additional context to some of the feedback.


Excellent topic. Excellent consistency across session name, abstract, topic and goals. Perhaps, given the topic, some real examples should have been added. Reference to PASS is 2013 should have been avoided.
While the abstract and topic are great I’m not sure that we would want to see a repeat session from last year.
Delivered too recently at the past Summit. Very targeted audience.
The abstract goes too much into EQ and feels disconnected from the title.


Keeping the "Business" in Business Intelligence (REGULAR SESSION – NOT SELECTED)


It is no accident the term “Business Intelligence” starts with “Business.” Any Business Intelligence initiative should, likewise, start with the needs of the Business. For many years, BI was seen as a technology project. This is one reason why so many BI initiatives fail. Rather than a Technology Project, BI is a Business Program. It must grow and evolve as the Business grows and evolves.


In this session, we will discuss the following:

— Why BI is a worthwhile investment (using case study examples)

— What criteria to use in determining the success of a BI initiative

— Several reasons why BI initiatives fail

— Critical Success Factors for BI


So much of the success for BI happens before the requirements are even gathered. Come learn how you can set yourself up for success with Business Intelligence.


Could be an interesting approach to a rather dry topic
The abstract is clear about what will be discussed as for failures of BI projects. If it has real examples, maybe you can get some demo to demonstrate. You can demo the results in chart, as time and effort, even the results.
Thanks for the abstract.


Power Query: Data Chemistry for The Masses (REGULAR SESSION – SELECTED)


ETL Developers have been doing chemistry with data for years in tools like SQL Server Integration Services. These tools require training, experience, and time that few business users have. But in the age of self-service BI, those business users need a way to shape data to support their analysis.


This session will show how Power Query can be easily used to take advantage of data’s properties to drive the change we need to support our goals.


We will discuss/demonstrate:

— The simple process of accessing a wide variety of data sources

— The ease with which simple transformations can be achieved using the Power Query Ribbon

— Power Query’s fantastic ability to travel through time to see every step taken with the data

— The foundations of the Power Query Formula Language, informally known as "M"

— Using "M" to take Power Query WAY beyond what the Ribbon has to offer.


Come learn about what may well be the most exciting member of the Power BI family.


seems like too much to cover in 75




As with so many aspects of life, a solid foundation makes a huge difference. This Star Trek themed introduction to MDX leads you on a voyage through the terms and concepts necessary for a solid foundation for learning this fascinating language. Terms covered include:

— Measures and Measure Groups

— Attributes and Dimensions

— Hierarchies

— Members

— Tuples

— Sets


This session also shows how you can think about the cube space in a way that is very easy to understand. The word "cube" suggests a 3 dimensional object. That way of thinking is fraught with confusion. Forget about the Rubik’s Cube. It doesn’t help.


With that foundation, we then dive into MDX syntax and fundamentals including:

— Query Axes

— Slicer Axis

— Tuples and Sets

— Hierarchy Navigation Functions

— Crossjoin

— Functions allowing us to travel through time


Come join us for a fun voyage through the cube space and boldly go where no MDX presentation has gone before.


Is the topic about MDX or DAX? Just got a little bit confused. The abstract states what will be discussed and what the analogy comes from. About the level, it may be better to be at level 100 since it is an introduction of MDX.


Getting Started with SSAS Extended Events (LIGHTNING TALK – SELECTED)


With SQL Server Profiler on its way to retirement, our friends on the relational database side of the house have already been taking great advantage of the power of Extended Events (XE). There is a lot of great info out there for using XE against the database engine. For Analysis Services, there is a lot less.


This Lightning Talk will demonstrate how easy it is to get started very quickly with SSAS XE once you have some basic information.


We will demonstrate:

— Creating an SSAS Extended Events Trace which outputs to a .xel file

— Making sure your trace is running via the DISCOVER_TRACES rowset

— Importing the contents of that .xel file into a SQL Server db engine table for analysis

— Deleting the SSAS Extended Events trace


Good topic and the abstract explains exactly what the attendee can expect from the session
Great abstract with details on what will be presented and what to expect to learn!
Thanks for the abstract,It’s good to have someone talk on the  power of Extended Events (XEvents) part.
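
Since this one was selected, here is a hedged preview of the import step: the query below sketches reading a .xel file into a database engine table with sys.fn_xe_file_target_read_file (the file path and table name are just placeholders):

```sql
-- Read the SSAS Extended Events output file(s) into a db engine table.
SELECT CAST(event_data AS XML) AS EventData
INTO dbo.SsasXeEvents
FROM sys.fn_xe_file_target_read_file(N'C:\Traces\ssas_trace*.xel', NULL, NULL, NULL);

-- Shred a couple of common fields out of the event XML for analysis.
SELECT
    EventData.value('(event/@name)[1]', 'nvarchar(128)')  AS EventName,
    EventData.value('(event/@timestamp)[1]', 'datetime2') AS EventTime
FROM dbo.SsasXeEvents;
```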


Reporting Services Pagination Triple Play (LIGHTNING TALK – NOT SELECTED)


The ability to have some control over the pagination of Reporting Services reports has been around a while. But it never hurts to review the fundamentals.


This demonstration will cover:

— Basic pagination in Reporting Services using Rectangles (Love these)

— Adding a page name that carries to Excel exports

— Adding a basic Table of Contents to your multi-page report using Bookmarks

— Adding a more dynamic, data driven Table of Contents to your report using Bookmarks and expressions


Come on out to this ballgame where we hit on SSRS pagination with a report about three of the most famous infielders in the history of Baseball.



Great abstract
Excellent topic that people always ask about in classes
100 demo!
Level appropriate to content
lots to cover in 10 minutes


My Takeaways

Given that there seems to be a wide range in both the quality and quantity of feedback provided to speakers, I have to say that I feel I made out pretty well here.

I am a little puzzled about the confusion over whether my MDX session is on MDX or DAX. And I think MDX is complex enough that any session on it is at least a 200 level, particularly given that almost everyone learns TSQL first and must “unlearn” some things in order to grasp MDX.
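
For anyone sharing that confusion, the two languages look nothing alike. A quick, purely illustrative contrast (assuming Adventure Works-style models):

```
-- MDX: queries cube space using axes and tuples
SELECT [Measures].[Internet Sales Amount] ON COLUMNS,
       [Date].[Calendar Year].MEMBERS ON ROWS
FROM [Adventure Works];

-- DAX: defines measures over tables and filter context
Total Sales := SUM ( Sales[Amount] )
```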

Given that the Keeping the “Business” in Business Intelligence session is about concepts and ideas, and not technology, I am not sure how I could add a demo to it that would not be contrived, included just to be able to say there was some demo.

Overall, I am pretty happy with this feedback and glad PASS made the decision to make it available.


Interview with Biml Creator Scott Currie

On June 12th, I had the pleasure of presenting to the Greenville Business Intelligence User Group in Greenville, South Carolina. I had a fantastic time and I have to say that the people of Varigence (a major sponsor of this group) showed wonderful hospitality. Part of my trip included a day of hanging out with Biml creator Scott Currie (Blog|Twitter) to learn about Biml, Mist, etc. I didn’t have a chance to play with Biml before heading down there and I have not yet been to a Biml-related presentation, so I made it clear to Scott that I was a green field. I told him this at dinner the night before and he just smiled and said, “I’m going to change your life tomorrow.” I have to say that it was not an empty promise. I was blown away by the current functionality of Biml as well as the potential for what is possible. If you have not had a chance to look into Biml, I highly recommend you do so. It’s brilliant.
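
To give a rough idea of what Biml looks like, here is a small, hypothetical BimlScript fragment: declarative, HTML-like XML describing your BI assets, with ASP.NET-style code nuggets (<# #>) that expand into multiple SSIS packages (the table names and connection string below are made up):

```xml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <Connection Name="Staging" ConnectionString="Provider=SQLNCLI11;Data Source=.;Initial Catalog=Staging;Integrated Security=SSPI;" />
  </Connections>
  <Packages>
    <# foreach (var table in new[] { "Customer", "Product", "Sales" }) { #>
    <Package Name="Load_<#=table#>" ConstraintMode="Linear">
      <Tasks>
        <ExecuteSQL Name="Truncate <#=table#>" ConnectionName="Staging">
          <DirectInput>TRUNCATE TABLE dbo.<#=table#>;</DirectInput>
        </ExecuteSQL>
      </Tasks>
    </Package>
    <# } #>
  </Packages>
</Biml>
```

One file, one pattern, three generated packages.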

I also had the chance to sit down for an interview with Scott Currie for this little blog of mine to talk about Biml, Mist, and the future of the BI ecosystem. Since I had not used Biml before, I reached out to some great members of our SQL Community that have been using Biml in order to get some of their questions for Scott. I want to thank Catherine Wilhelmsen (Blog|Twitter) and Samuel Vanga (Blog|Twitter) for helping me out.

Below is my interview with Scott. Please note, as with my previous interviews, edits were made, with Scott’s permission, to remove the byproducts of casual conversation for better flow in writing.

Scott Currie Interview


When it comes to Biml, some of the stuff I hear in the market, and some of the perceptions I had before I came here, were that Biml was about creating a lot of SSIS packages at once. But, I’ve never been in a situation where I needed to create 100 packages or 200 packages. What do you say to someone who says, “Well, I don’t think Biml is for me because I don’t have to do that?”


Yeah. That’s something we’ve heard from people in the community, as well, who aren’t core Integration Services developers and aren’t creating tons of staging environments and things. And I think the reason that perception has come about is because it’s the easiest, most obvious example you can show anybody in a half hour to an hour presentation. You don’t need to provide a lot of context to create a staging environment from scratch, for example. And what you get out of it is, just as you noted, hundreds of packages or one big package with hundreds of data flows in it. So, it’s the 101 example that everybody sees and they think, sometimes, that’s all there is to it. Whereas, what we’re seeing people do in the real world with it is usually to start with that because there’s definitely value in being able to automatically create staging environments and other sorts of very rote automation. But then they start to take it further. They start implementing their patterns and practices on top of it in really clever ways. They start adding additional metadata stores. And sometimes their semi-technical or even non-technical people start adding configuration information. And that configuration information can be used to create complex business logic, all of the patterns, all of the logging, unit testing; all of the stuff that normally is the plumbing that takes a lot of time to do is now being auto-generated around configuration information that’s actually adding value to the business. What we see happening is people start with that rote automation and then they start to move into having custom business logic and injecting their patterns into it. Really, the way to think about Biml, after you’ve gotten the core concepts, is to think of it as patterns and frameworks engine that allows you to automate the plumbing, but doesn’t restrict you into a specific approach for that automation. You can implement whatever patterns you want to. 
You have to, of course, do that implementation. But, once you’ve implemented it, you can do whatever you want to. The sky is the limit. And you can have those patterns interact with custom business logic and you’re not constrained on either side.


For those that may not be familiar with Biml or really just see the SSIS facets of Biml, can you talk a little bit about some of the offerings you already have outside of SSIS, and maybe a little bit about what you can share about what’s coming?


The Bids Helper add-in to BIDS [Business Intelligence Development Studio] and SSDT [SQL Server Data Tools], which is a free and open-source add-in that is available on Codeplex actually includes some of the Biml functionality. It does include a subset, though. It has all the stuff that we have for relational modeling and being able to manage your relational assets. It, additionally, has most of the Integration Services features. You do have to purchase a product in order to get some of the additional stuff. Some of the additional things include Analysis Services functionality. Currently, we support all of SSAS Multidimensional. In our upcoming release, which will be coming later this Summer, we do have SSAS Tabular as well. We also have the ability, in the upcoming release, to do things like metadata modeling and being able to construct, in a very reusable way, some of that metadata that I mentioned that becomes very useful in your more complicated scripting. We have the ability to do things called Transformers. Actually, they’re present right now. They allow you to, in a very modular fashion, specify what a pattern looks like. You can say, in one or multiple Transformers, here’s how I do logging. You automatically add Row Counts on your OLE DB Destinations, including creating the variable to store those, and including the execution of stored procedures to go ahead and write those to the database. This includes Event Handlers. You can put all that into these little Transformers and then you can have the tools actually inject that into your custom logic. There are some very powerful things there. Also, on the Analysis Services side, you can use Transformers to automatically control your measure formats, for example. You can add that to any other automation you already have in place to also automatically build a cube off of your dimensional model. We have a lot of options. 
In a lot of cases, it’s difficult to talk about individual features because the way we built this out is to provide you with the tools you need to build anything. It’s hard for us to be prescriptive and say “Here’s what you ought to go and build” or “Here’s what you can build” because the answer is, essentially, you can build anything. It’s just that you have to make the decision as to which parts you want to automate and which parts you want to keep custom and manual. And that’s going to be a different analysis that’s done by every single organization that is approaching the tool.


You talked about Bids Helper. This question comes from Catherine and I thought it was a great question: What are the future plans for Biml support in Bids Helper?


In Bids Helper, we’re going to continue to update things so that the subset of functionality that is in Bids Helper is going to be up-to-date. As we bring in additional utility methods to be able to bring in your metadata more quickly, and as we bring in additional methods to be able to very easily construct SQL queries (we already have some)… We’re adding additional helpers all the time to make queries easier to write. There’s also SQL 2014 support. All of that is going to just come along for the ride. As it is implemented in Biml, it is there in Bids Helper. We also know that there are some usability issues with Biml in Bids Helper right now in terms of how strong a development environment you have inside of BIDS and SSDT for Biml. One of the things we are definitely doing in one of the upcoming releases in Biml for Bids Helper is improving the error messaging story so you can get a much clearer picture of exactly what your errors are. And you can navigate your errors a bit more easily than you can today. The other thing that we know is a big issue for Biml in Bids Helper is the code editing story. Right now, when you open up a Biml file in Bids Helper, what you get is essentially the standard XML editor that ships with Visual Studio. And that works OK as long as you’re doing flat Biml. But as soon as you start to put in code nuggets to do your automation, the Visual Studio XML editor doesn’t know how to interpret those. It gets very confused and you lose all of your Intellisense and you get error squiggles saying there are problems when there actually aren’t problems. We are 100% aware that this is a problem and we’re looking at a bunch of different options there. We’ll probably have some announcements to make later on about that. We’re definitely thinking about it and working on it, but we don’t have anything to share just yet, unfortunately, about that piece of the story. 
Outside of the error messages and the code editing, there are a bunch of value-add services that we could, potentially, build into the Bids Helper story around being able to more easily share scripts and share frameworks. We’re also thinking about those. And we will also have some announcements around those in the future, too.


These next few questions come from Samuel. How did the idea for BIML and the foundation for Varigence and Mist and everything else get formed?


The original idea actually came when I was working at Microsoft. I was on the Developer Tools team there working on Visual Studio. Almost by accident, I fell into what became a data warehousing project. So, I was an application developer essentially working on developer tools who became an accidental DBA in a very real way. And one of the things I noticed, with that particular blend of experience, is that most of what we have learned about doing application and web development really well over the past several decades didn’t find its way into Data development. And I think there are a variety of very good reasons, historical reasons, why that happened. But I thought there might be some benefit to trying to re-imagine those things that we have learned doing application and web development in terms of data development and see if something interesting fell out of it. So, to make a long story short, essentially, what we did was keep in mind there are a lot of parallels in what you can do in web development and the types of problems you try to solve in data development. What if you could have an HTML-like language that would describe your Business Intelligence or data warehouse solution? And then, once you’ve got that, take an ASP.NET type approach in putting code nuggets in to automate it. With that as a foundation, almost all of the things that you normally like to do on application or web development just light up and start working. Source control becomes valuable again. Builds become very very powerful and continuous integration becomes something that’s very useful. Being able to do automation and patterns-based development and best practices that are enforced for your team; all of these things just start lighting up, almost for free, once you move to that human-readable, writable, declarative HTML-like language with code nuggets interspersed. So that was the original insight. 
If we had this, we could go ahead and start turning out all these interesting things and start doing data development more efficiently and in a more maintainable way. Of course, it was a long journey to get to the place where we actually implemented all that stuff, which is where we are now. But that was the original insight that actually took place while I was actually building out developer tools, but for Application and Web development.


Given that history, and how you got started, what are your short, medium and long-term goals for Varigence? Where do you see this going?


Short term goals are all about, you could say, finishing the engine. As I mentioned a little bit ago, we’re adding in support for SSAS Tabular in the next version, 4.0, so, with that addition, and some of the enhancements we’re making, we’ve essentially got full coverage in features for Relational, Integration Services, and Analysis Services, including Power Pivot. And that’s going to be a great story. Now we’ve got the basis for building out any solution on top of those technologies. So, the medium-term goal is going to be a combination of two things. One is that we may start biting off additional pieces of the stack and not limiting ourselves to just Relational, Integration Services, and Analysis Services (Power Pivot, too). There’s some really interesting things happening elsewhere in the Microsoft stack when you look at things like DQS and MDS and some of the Power BI stuff that’s happening. And we still hear a lot of requests for Reporting Services. Those are all things we’re looking at as potentially building in the medium term and expanding out that engine to have entirely new areas. In the medium and also long term, we’re looking at leveraging the engine in different ways as well. So, once you’ve got that core engine, people start asking the next set of questions like “How can you make it easier for me to manage my metadata?” This is one we’re already working on for 4.0. “How can you make it easier for me to take the solution I’ve built and package it for a hosted solution offering that my professional services company can offer to its clients? How can you make it so it’s easier to put this in the Cloud?” So, there’s all these additional ways of repackaging this engine and providing additional services around it which enable entirely new scenarios. In the medium to long term, that’s where we’re going to have a lot of focus: enhancing all of the services around the engine instead of the engine-level focus that we’ve largely had thus far.


So, with that vision in mind, not just for Varigence, but for that “better way,” what would you say my job, as a BI Developer, would look like in five years?


I think that’s a very interesting question and I would have to stop and ask what you mean by BI Developer in terms of a day-to-day job. Because, I think one of the issues that we’ve all faced in this industry is that, for a lot of reasons, some of them cultural, some of them historical, some of them because of the tools that we work with, we’ve all been forced to wear multiple hats. So, in any given day, I might do Data Analyst work. I might do Data Architecture. I might do BI Developer work, BI Architect work… I kind of have to jump back and forth. Even if I have an Architect at my company who’s providing me with patterns that I should use, generally speaking, they’re giving me some sort of template file that I then have to go and understand and customize for my particular solution. What we think is going to happen, and one of the things we are really trying to enable, is to allow those roles to separate. So that if I’m an Architect, I can provide a pattern, not just in a template, but in a completely reusable code file that the tools can then apply to anything. And as a BI Developer, I might focus on implementing business logic or implementing the bespoke, custom parts, the complexity that you can’t automate away. And then let that automatable complexity get handled by the tools that the Architect has driven. I don’t have enough confidence in how things progress after that to say what your day looks like. But I think one of the key things that you can expect to happen, especially if we’re successful in some of the approaches we’re taking, is to have your job be more focused so that you’re not having to wear all of these multiple hats at the same time. If we have our druthers, what we’d really like to do is also make your job a little bit more fun. 
Part of what we’ve heard from BI Developers in the past is that they never got into doing Business Intelligence because they thought it was fun to drag and drop onto a design surface and implement logging rules for particular regulatory compliance in a particular industry. They got into it because they love data and they love insight and they love being the first person in the world to know something interesting or important about their job or about their world. So, what starts happening is that once we start doing this as our day job, we end up spending more of our time doing the plumbing and doing this sort of “not fun” work and less of our time on the insight generation. And hopefully, through technologies like what we’re building and what we’re seeing happening elsewhere in the industry, that’ll start to shift to where I can have that role separation and focus on the parts of the job that I actually do love. And at the same time, I can have fun doing that again because, in the job that I’m doing, I don’t have to spend the time on the drudgery. More specialization and more fun is hopefully what is in the future for BI Devs.


So, here’s another question from Samuel. With so much emphasis on Self Service, including self service ETL with tools like Power Query that make it easier to move data around, what do you see happening to traditional ETL and SSIS? What do you see evolving from that enablement of end users while, at the same time, retaining that Enterprise Class complexity where it is needed?


That’s one of the challenging things with the message around Self Service BI. I’ve certainly seen people polarize into different camps where some folks will say that Self Service BI is the future and traditional ETL and IT developed solutions are going to be a thing of the past; no one’s going to use them anymore. And, of course, I’ve seen the polar opposite where some people are saying that Self Service BI is a flash in the pan; you can’t solve any real problems with it; there’s no real organizational benefit to having it and it causes more problems than it’s worth. I’ve definitely heard people saying both of those things. I think that the challenge in all of that, or the issue with all of that, is that people are trying to apply those technologies or those solutions to problems that they’re really not well suited to solve. Self Service BI has an excellent and important role in the organization for being able to empower individual decision makers to get their questions answered at the speed of the Business rather than at the speed of the technology. Often times, there’s a mismatch there. But at the same time, your Self Service BI tools are never going to work well if your data is not already in decent shape. If you’re plugging bad data into Self Service BI, you’re going to get bad insights out. And if you’re taking data that is not well formatted or well aligned to the types of questions you’re trying to ask, you’re not going to be able to get your questions answered unless you transform that data in a way that actually aligns it with those answers. And any tool that can do that needs to be complex enough to handle the complexity of those transformations. So, if I have a Self Service BI tool that is intended to solve ANY ETL problem, it’s very rapidly going to become as complex as Integration Services because you need that level of complexity in order to solve those data integration problems. 
There’s no getting around it: you can’t wave a magic wand and make complexity go away. And that’s a good thing. If you could do that, we’d all be out of a job. So, what I think the future looks like is that you’re going to have a very strong presence in Self Service. Self Service is going to be part of that last-mile story for data. But that’s going to make the person whose day job it is to build that Enterprise Class data warehouse or Enterprise Class Tabular model even more important, because their work is going to be much more heavily leveraged. Instead of just leveraging that work through canned reports, we’re also going to be leveraging it through all of these additional Self Service models. That, I think, is great for both sides of the fence. But you have to pick and choose your solution for the right business problem and not say, “I’ve got a solution, so I am going to go solve every problem with it.”


Here’s another question from Catherine. How has the Effektor/Mist integration impacted Biml? And please talk a bit about what that integration is.


Absolutely. We’ve got a great partner that’s based in Copenhagen, Denmark, though they have offices throughout the Nordic countries. In addition to being a great consulting and professional services partner, they also have a product called Effektor, which enables nontechnical users, through configuration, to take a well-prepared data warehouse or data mart and build out a whole bunch of additional features on top of it, including cubes and workflows. There’s a bunch of stuff you can do with it. It’s consistent with the philosophy of Biml; i.e., it’s better, where possible, to have configuration instead of having to code everything up from scratch.

One of the things they noticed as they were building out more and more functionality is that it made sense for them, instead of building an engine that could do all of the Integration Services and Analysis Services code generation, to just use the Biml engine for that instead. That lets them focus on the parts that are important to the business: here’s the type of metadata we need, here’s the type of configuration UI we need, and here’s how we’re going to translate that into our patterns and practices. Then they can plug all of that into the Biml engine and let us do all the code generation bits. So, we announced recently that they are going to be integrating the Biml engine into Effektor for their code generation, and they’re also going to be providing Mist as an option for some of the custom logic generation that supplements what’s available inside the Effektor product.

In terms of new directions for Mist and Biml, I think the interesting thing about that question is that the way we have architected Mist and Biml does not require that we change our direction in order for new and interesting things to happen. You don’t have to rely on US to do things for you.
So, the fact that Effektor is now enabling these new scenarios means that the Mist and Biml ecosystem is going to progress in this new direction without our having to make any product changes. There are definitely going to be things we do to make the development of Effektor easier and let them focus even more on their areas of expertise and less on some of the code generation, which is our area of expertise. But really, I think the more interesting thing is that now that the Mist and Biml story is enhanced by this additional Effektor functionality, that’s a new direction in and of itself. And that’s the thing that’s most exciting.

We love the Effektor relationship, and we love the relationships with our other partners as well, because they start using the product in ways that we would never have the resources to enable ourselves if we were doing all development independently. In a lot of cases, they do things that we never really anticipated or thought of. And that’s something that’s really rewarding, I think, as a tools developer, because it tells you that you got it right. If somebody is successful in using your tool in a way that you didn’t intend up front, that you didn’t plan for, that means you built a really robust and really useful tool. I’m definitely of the mind that nobody can solve all of the problems. But somebody can provide building blocks. And your building blocks are only as good as the problems that get solved with them. So, if we’re solving all these interesting problems, it means we built good building blocks, which is rewarding.
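For readers who haven’t seen Biml, a rough sketch of the kind of declarative markup the engine consumes may help make the Effektor discussion concrete. This is a minimal, hypothetical example, not taken from Effektor itself: the connection name “Target” and the table names are placeholders, and the BimlScript code nugget (the `<# … #>` block) is the mechanism by which metadata-driven loops expand into one generated SSIS package per table.

```xml
<!-- Minimal BimlScript sketch; "Target" connection and table names are illustrative -->
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Packages>
    <# foreach (var table in new[] { "Customer", "Product" }) { #>
    <!-- One SSIS package is generated per table in the list above -->
    <Package Name="Load_<#=table#>" ConstraintMode="Linear">
      <Tasks>
        <ExecuteSQL Name="Truncate <#=table#>" ConnectionName="Target">
          <DirectInput>TRUNCATE TABLE dbo.<#=table#>;</DirectInput>
        </ExecuteSQL>
      </Tasks>
    </Package>
    <# } #>
  </Packages>
</Biml>
```

A product like Effektor can emit this kind of markup from its configuration metadata and hand it to the Biml engine, which compiles it into deployable Integration Services packages, which is exactly the division of labor Scott describes.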


Here is another question from Catherine. Are there any plans down the road for Biml books?


Absolutely. We have a couple of books actually under development right now. We have a stable of authors that we have recruited. Actually, it wasn’t much of a recruiting effort. We were originally thinking about doing just one book, and we approached a list of authors because we wanted it to be a community-collaborative sort of thing, with a recognizable author writing each chapter. We made the list a lot longer than we thought it needed to be, because we thought maybe half or two thirds of them would say no. The funny part is that when we approached them, I think every single one said yes. So, we had more authors than we were originally intending, and the solution there was to go ahead and do two books.

We’re still in the very early stages. It’s going to take a while; books take a long time to get out the door. We are going to have one book whose working title is “Biml: The Definitive Reference,” which will be an end-to-end resource to learn everything you need to know to be effective with Biml. It will have reference material and conceptual descriptions as well. The other one is a Biml cookbook, where the chapters will be devoted to specific problems you need to solve, with options for different patterns you could use. For example, we might have a chapter on unit testing, with all sorts of different recipes you could use to do unit testing very effectively with Biml. Or another chapter on patterns like Type 1 and Type 2 slowly changing dimensions in Biml, with various options for doing each. So that, I think, is going to be great. It’s going to cover both ends of it. For the people who just like to sit down and become experts on stuff, we’re going to have an option there. And for people who prefer to wait until they are confronted with a problem and then go read specifically about that, we’re going to have an option there, too.
Unfortunately, we’re not announcing a timeframe on that yet because we don’t want to get it wrong and we’re still a little too early to know exactly what the ship date is going to be. But, it’s definitely something that is an active area of work for us.


Catherine helped with this question, as well. Right now, there are members of the community stepping up to do presentations. Sometimes those presentations happen in places that might not be feasible for you to get to because of cost, or whatever else it may be, and there’s a limited number of people you have here at Varigence. Are there any plans to have a Train the Trainer or some sort of certification program to say, “This person is a certified Biml Trainer,” and give them access to materials? I’m thinking of the Microsoft Certified Trainer program, and obviously that’s a very robust thing. But are there plans for something like that?


Yeah. Absolutely. And in fact, we’re past the planning stage on that; we actually have something in production right now. But first, just a note about the Community talks. A lot of folks aren’t aware of this, and hopefully we’re going to start doing a better job of publicizing these talks that are happening worldwide. If you look at just the past six months, January 1 through June 30, worldwide, we have had 84 talks across 15 countries with 22 distinct speakers. That’s incredibly rewarding, and I just want to give a huge Thank You to the Community. That’s something where there is absolutely no way we could get that amount of reach on our own. These are people who don’t work for Varigence; they do it because they love the technology and they love telling the story to others. So, I think we’ve already got something great there.

Of course, your one-hour Community event isn’t a replacement for training where you’re actually able to go on site and deliver a full training program where, at the end of it, you’ve got the expectation that the team is going to hit the ground running and start working directly. For that, we work through partners. We do have a training program we can offer directly, but we prefer to work through partners as often as possible, because, going back to some previous conversations, we don’t think there is any way we can be as effective training somebody in, say, the Healthcare vertical as a professional services company that spends all their time in Healthcare. We would rather say, “Let’s get you set up on Biml” instead of “Let’s get our trainers able to talk Healthcare.” So, we already have a program; we have a Consulting Partnership program. If there are consulting companies out there that are interested in being able to work directly with Biml, and actually train with it as well, get in contact with us. We have a Train the Trainer program and a whole bunch of materials that are out there.
So, essentially, we have pre-canned offerings that you can start with and then tweak. We have one-hour, two-hour, half-day, full-day, three-day, and full-week sessions. We’ve got all of those pre-canned, and of course you can tweak them and supplement them. They’re module based, so you can add in your own modules. You can tweak them to use your own branding, or to change some of the messaging if there are particular aspects that you know would be more or less interesting to your particular client or your particular industry vertical. So, we’ve already got all that set up, and if anybody’s interested in engaging on that, please don’t hesitate to contact us.

Wrapping Up

That takes care of the interview with Scott. I have to thank Scott very much for his time on this and the time he spent showing me Biml and Mist. I also need to thank Catherine and Samuel for their help in coming up with appropriate questions for this interview. Overall, I was immensely impressed with the amazing work that Varigence, with Scott’s leadership and vision, has done with Biml and Mist. I am now planning how I can roll time for learning Biml into my schedule around everything else going on. I am sure I can justify the ROI.

I hope this helps shed some light on Biml and where it is headed.