Category: Interviews

PASS Summit 2014 PASS Board Q and A

At PASS Summit 2014 in Seattle, I attended a couple of Q and A sessions with members of the PASS Board of Directors. There were three such sessions; I missed the general Q and A, as my Summit presentation was at that same time. Rather than try to capture the play-by-play of everything, I decided to distill all my notes into a single post that is more of an article. I have not really tried this type of journalistic article before, so I hope you will bear with me.

For the past two years, the PASS Board election process has been somewhat less than smooth from a Community perspective. Last year, there was uproar over the fact that people with multiple sqlpass.org accounts could vote from each account, allowing them to vote multiple times. To rectify that situation prior to the 2014 election, PASS instituted a policy requiring people to update their PASS profiles in order to be eligible to vote. The plan was also to look for duplicate accounts and have people with multiple accounts choose a single one to serve as their account. As listed in this post by PASS President Thomas LaRock (Blog|Twitter) on the PASS blog, PASS communicated this requirement many times on many different channels. Despite this, there were many members who did not receive the message and/or take the necessary action to ensure their eligibility to vote in this year’s election. Once again, there was a massive outcry against PASS over this situation. When asked about it, PASS Executive Vice President, Finance and Governance, Adam Jorgensen (Blog|Twitter) replied, “A significant amount of work went into the communication plan on this. People didn’t get an email, but email was not the only thing.” Given that PASS tried so many avenues of communication, Adam wants to ask PASS members the following: “What is the BEST way to make sure, when we communicate with folks, that they read it AND take action? What is the best vehicle? What channels are most effective for them?”

All of this outcry started once the election began and people discovered they were not eligible to vote. There was an understandable amount of frustration from people whose reactions suggested they felt disenfranchised. The PASS Board responded by extending the election to allow people more time to update their profiles. I asked what went into making that change. Adam explained that the PASS organization is governed by its Bylaws as well as the laws of the state of Illinois. As such, it is not able to turn on a dime. According to Adam, “We had to decide if we wanted (and if so, how) to change things and extend the election. This required 12 people in a very short time. HQ did a great job to help coordinate that. We had to talk to our legal counsel to assess the ramifications of changing things midstream.” Denise McInerney (Blog|Twitter), PASS Vice President, Marketing, added to this, “HQ did an amazing amount of work during that election change. A lot of credit goes to HQ for enabling us to do this as quickly as we did.” Adam went on to describe some of the thinking involved. “Is it good governance or bad governance to extend or change the election in the middle? LOTS of conversation on that.” In the end, the Board reached a conclusion that I agree with: “[It is] bad governance to change the election, BUT governance isn’t just about trying to err on the side of good or bad governance, but also about keeping a community what we want it to be.” The Board decided it was more important to keep the community whole than to perfectly adhere to policies they had enacted.

Another topic that has been on people’s minds is the decision to commit to continuing to develop the Business Analytics (BA) Conference. At the Blogger Q and A, Thomas LeBlanc (Blog|Twitter) asked why this investment was going to continue given the challenges it has faced. Thomas LaRock gave the first response. “Three years ago, Microsoft came to us and said, ‘You guys have built something incredible. Can you go find the Business Analytics folks?’” Basically, PASS has helped build and foster an amazing community around Microsoft data technologies among those who develop the tools and provide services for others. Microsoft asked PASS to try to do the same thing for the people who consume those services and that data. In the age of Self Service and tools like Power Pivot, there is certainly some overlap between the people who enable analysis of data and those who perform that analysis. But largely, the audience for the BA Conference is one that PASS has not really targeted or served before.

Regarding the fact that the BA Conference has not really found its footing yet, Denise contributed, “You learn as you go. We didn’t quite hit the mark on getting the program and audience matched up. We tried to be too many things to too many people.” She added that PASS does Community very well, and that Community really helps people develop and learn. I have to agree on that. Attendees at PASS events often have great experiences of learning something cool and saying, “I’m going to try this at work on Monday.” According to Denise, “We want to create THAT experience for the business data user.”

Adam acknowledged that there have been questions around why PASS didn’t just do a BI (Business Intelligence) conference. “It’s not about BI. And if you do a BI conference, where does half of Summit go?” He then added what I feel is a great point: “We don’t want to split audiences up; we want to bring them together.” This is key, as it reinforces the understanding many of us have that the audience for the BA Conference is not really a subset of the existing PASS Summit audience; it is a different group of people that PASS has not served before. As Adam explained, “Are we taking something away? We’re not. We are building something new that is additive.”

Regarding BA Conference locations: the first was held in Chicago, and last year’s in San Jose, CA. Next year’s will be in Santa Clara, CA, and 2016 will be back in San Jose. Thomas LeBlanc asked why the BA Conference is not as mobile as Summit. According to Denise, “Silicon Valley was strategic. Silicon Valley is on the leading edge with what is happening with Analytics. By locating there, we thought we would have access to a pool of speakers that would be local to the conference.” The idea is that this would make it easier to get speakers. Locating in Silicon Valley is not just about speakers, though. Denise continued, “Lots of the target audience lives there. Exhibitors would be nearby and it would be easy to get them.”

The topic of speakers brought up the decision NOT to have a Community Call For Speakers for the BA Conference this year. Instead, speakers will be invited only. For many, this is seen as a problem. I, myself, being a speaker who has content appropriate for business users, was disappointed by this decision at first. But the more I thought about it, the more it made sense to me. In the existing PASS Community, we have a lot of presenters with content around BI, but very few with analytics topics. My own content around showing people how to use Power Pivot is still BI, even if it is aimed at the business user. It is still about enabling analytics as opposed to performing analytics. Adam pointed out that when Summit first started, there was no Community Call For Speakers. According to Adam, “We have looked at the [evaluation] results over the past few years. People liked it but the program was mis-targeted to them.” Some have asked why the board did not choose to do a community call for a percentage of the sessions. Adam indicated that the board discussed that and arrived at this thought: “What percentage is acceptable to be NOT OK with attendees? That would be zero.” I have to agree with Adam here. That is also why I totally understand why I did not receive an invitation to speak. I have not demonstrated that I have content appropriate for an Analytics audience.

At this point, it is appropriate to bring in some comments from the separate Q and A about the BA Conference. Jen Stirrup (Blog|Twitter), Director-at-Large, Virtual Chapters, gave further explanation behind the decision to keep forging ahead. “We did a lot of research with the BA Conference. People are really excited about Microsoft products, so that will continue to be a base. But, Tableau, for example, has its own user conference. We don’t want to just create another Tableau conference. I have spoken at those. They were much more sales driven and about user stories. With PASS, the main emphasis is on practicality. Learn on one day and apply it on the next.” She then offered a key difference between BI and Analytics that I had not heard before: “BI projects focus on deliverables. BA stuff focuses on business value.” That makes sense to me. With BI, we work to produce some object, whether it be a data model, cube, report, dashboard… As Jen added about Business Analytics, “[it] is less time-boxed and more value oriented.”

Amy Lewis (Blog|Twitter), Director-at-Large, PASS Programs, made a fine point here as well: “One thing we did poorly before was focusing on the tools. We saw we needed to stop focusing on the architect perspective and focus more on the data analyst standpoint.” When asked about the personas present in the intended audience, Denise added, “We have come to understand it is not about job title. It is about what they do all day. What they have in common is Find Data, Make Sense, Produce Something With Impact.” Denise went on to point out that there were marketing issues in the past. “As far as Marketing, last year people thought it was a SQL conference. It has its own site now. All the messaging is consistent now to clarify that.” Head on over to the new website to learn more about the new and improved messaging.

I want to close with a direct request from the Board. Adam pointed out that the PASS Board are fairly reachable people who are happy to respond to questions they receive. He stressed, “I would request that if you want to know the answer to a question, ASK THE QUESTION. We are happy to have a conversation, but the judge, jury, and executioner style is just NOT productive.” Adam gave a great example of a blogger who wanted some facts for a blog post and ended up having a great conference call with several board members to get the information he needed. Tom then added, “I have really enjoyed this blogger Q and A at Summit. We should find a way to do this more.” We discussed the idea of doing this quarterly or so, perhaps in a format like the #DataChat that happens on Twitter now. That sounds like a good plan and a good way to give people who may feel they are not being heard another avenue to speak out.

I hope you found this helpful. I certainly feel better about these topics having attended the Q and A sessions and put in the work to write this up.

Interview with Biml Creator Scott Currie

On June 12th, I had the pleasure of presenting to the Greenville Business Intelligence User Group in Greenville, South Carolina. I had a fantastic time, and I have to say that the people of Varigence (a major sponsor of this group) showed wonderful hospitality. Part of my trip included a day of hanging out with Biml creator Scott Currie (Blog|Twitter) to learn about Biml, Mist, etc. I didn’t have a chance to play with Biml before heading down there, and I have not yet been to a Biml-related presentation, so I made it clear to Scott that I was a green field. I told him this at dinner the night before and he just smiled and said, “I’m going to change your life tomorrow.” I have to say that it was not an empty promise. I was blown away by the current functionality of Biml as well as the potential for what is possible. If you have not had a chance to look into Biml, I highly recommend you do so. It’s brilliant.

I also had the chance to sit down for an interview with Scott Currie for this little blog of mine to talk about Biml, Mist, and the future of the BI ecosystem. Since I had not used Biml before, I reached out to some great members of our SQL Community who have been using Biml to get some of their questions for Scott. I want to thank Catherine Wilhelmsen (Blog|Twitter) and Samuel Vanga (Blog|Twitter) for helping me out.

Below is my interview with Scott. Please note, as with my previous interviews, edits were made, with Scott’s permission, to remove the byproducts of casual conversation for better flow in writing.

Scott Currie Interview

Mark

When it comes to Biml, some of the stuff I hear in the market, and some of the perceptions I had before I came here, were that Biml was about creating a lot of SSIS packages at once. But, I’ve never been in a situation where I needed to create 100 packages or 200 packages. What do you say to someone who says, “Well, I don’t think Biml is for me because I don’t have to do that?”

Scott

Yeah. That’s something we’ve heard from people in the community, as well, who aren’t core Integration Services developers and aren’t creating tons of staging environments and things. And I think the reason that perception has come about is because it’s the easiest, most obvious example you can show anybody in a half hour to an hour presentation. You don’t need to provide a lot of context to create a staging environment from scratch, for example. And what you get out of it is, just as you noted, hundreds of packages or one big package with hundreds of data flows in it. So, it’s the 101 example that everybody sees, and they think, sometimes, that’s all there is to it. Whereas, what we’re seeing people do in the real world with it is usually to start with that, because there’s definitely value in being able to automatically create staging environments and other sorts of very rote automation. But then they start to take it further. They start implementing their patterns and practices on top of it in really clever ways. They start adding additional metadata stores. And sometimes their semi-technical or even non-technical people start adding configuration information. And that configuration information can be used to create complex business logic, all of the patterns, all of the logging, unit testing; all of the stuff that normally is the plumbing that takes a lot of time to do is now being auto-generated around configuration information that’s actually adding value to the business. What we see happening is people start with that rote automation and then they start to move into having custom business logic and injecting their patterns into it. Really, the way to think about Biml, after you’ve gotten the core concepts, is to think of it as a patterns and frameworks engine that allows you to automate the plumbing, but doesn’t restrict you to a specific approach for that automation. You can implement whatever patterns you want to. You have to, of course, do that implementation. But once you’ve implemented it, you can do whatever you want to. The sky is the limit. And you can have those patterns interact with custom business logic, and you’re not constrained on either side.
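[To make Scott’s “101 example” concrete, here is a minimal BimlScript sketch of the kind of metadata-driven generation he describes, producing one staging package per table. The connection strings and the hard-coded table list are placeholders of mine; in a real project, the list would typically come from a metadata query rather than being hard-coded.]

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <!-- Placeholder connections; point these at real source and staging databases -->
    <OleDbConnection Name="Source" ConnectionString="Provider=SQLNCLI11;Data Source=.;Initial Catalog=SourceDb;Integrated Security=SSPI;" />
    <OleDbConnection Name="Staging" ConnectionString="Provider=SQLNCLI11;Data Source=.;Initial Catalog=StagingDb;Integrated Security=SSPI;" />
  </Connections>
  <Packages>
    <# foreach (var table in new[] { "Customer", "Order", "Product" }) { #>
    <!-- One package is emitted per table name at compile time -->
    <Package Name="Load_<#=table#>" ConstraintMode="Linear">
      <Tasks>
        <Dataflow Name="DFT Load <#=table#>">
          <Transformations>
            <OleDbSource Name="SRC <#=table#>" ConnectionName="Source">
              <DirectInput>SELECT * FROM dbo.[<#=table#>];</DirectInput>
            </OleDbSource>
            <OleDbDestination Name="DST <#=table#>" ConnectionName="Staging">
              <ExternalTableOutput Table="stg.[<#=table#>]" />
            </OleDbDestination>
          </Transformations>
        </Dataflow>
      </Tasks>
    </Package>
    <# } #>
  </Packages>
</Biml>

[Compiling this emits three SSIS packages; swap the array for a query against a metadata store and the same script scales to hundreds, which is where the patterns-and-frameworks idea Scott describes takes over.]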

Mark

For those that may not be familiar with Biml or really just see the SSIS facets of Biml, can you talk a little bit about some of the offerings you already have outside of SSIS, and maybe a little bit about what you can share about what’s coming?

Scott

Bids Helper, a free and open-source add-in for BIDS [Business Intelligence Development Studio] and SSDT [SQL Server Data Tools] that is available on CodePlex, actually includes some of the Biml functionality. It does include only a subset, though. It has all the stuff that we have for relational modeling and being able to manage your relational assets. It additionally has most of the Integration Services features. You do have to purchase a product in order to get some of the additional stuff. Some of the additional things include Analysis Services functionality. Currently, we support all of SSAS Multidimensional. In our upcoming release, which will be coming later this Summer, we do have SSAS Tabular as well. We also have the ability, in the upcoming release, to do things like metadata modeling and being able to construct, in a very reusable way, some of that metadata that I mentioned that becomes very useful in your more complicated scripting. We have the ability to do things called Transformers. Actually, they’re present right now. They allow you to, in a very modular fashion, specify what a pattern looks like. You can say, in one or multiple Transformers, here’s how I do logging. You can automatically add Row Counts on your OLE DB Destinations, including creating the variables to store those, and including the execution of stored procedures to go ahead and write those to the database. This includes Event Handlers. You can put all that into these little Transformers and then you can have the tools actually inject that into your custom logic. There are some very powerful things there. Also, on the Analysis Services side, you can use Transformers to automatically control your measure formats, for example. You can add that to any other automation you already have in place to also automatically build a cube off of your dimensional model. We have a lot of options. In a lot of cases, it’s difficult to talk about individual features because the way we built this out is to provide you with the tools you need to build anything. It’s hard for us to be prescriptive and say “Here’s what you ought to go and build” or “Here’s what you can build” because the answer is, essentially, you can build anything. It’s just that you have to make the decision as to which parts you want to automate and which parts you want to keep custom and manual. And that’s going to be a different analysis that’s done by every single organization that is approaching the tool.

Mark

You talked about Bids Helper. This question comes from Catherine and I thought it was a great question: What are the future plans for Biml support in Bids Helper?

Scott

In Bids Helper, we’re going to continue to update things so that the subset of functionality that is in Bids Helper is going to be up-to-date. We’re bringing in additional utility methods to pull in your metadata more quickly, and additional methods to very easily construct SQL queries (we already have some); we’re adding helpers all the time to make queries easier to write. There’s also SQL 2014 support. All of that is going to just come along for the ride. As it is implemented in Biml, it is there in Bids Helper. We also know that there are some usability issues with Biml in Bids Helper right now in terms of how strong a development environment you have inside of BIDS and SSDT for Biml. One of the things we are definitely doing in one of the upcoming releases of Biml for Bids Helper is improving the error messaging story so you can get a much clearer picture of exactly what your errors are. And you can navigate your errors a bit more easily than you can today. The other thing that we know is a big issue for Biml in Bids Helper is the code editing story. Right now, when you open up a Biml file in Bids Helper, what you get is essentially the standard XML editor that ships with Visual Studio. And that works OK as long as you’re doing flat Biml. But as soon as you start to put in code nuggets to do your automation, the Visual Studio XML editor doesn’t know how to interpret those. It gets very confused and you lose all of your Intellisense, and you get error squiggles saying there are problems when there actually aren’t problems. We are 100% aware that this is a problem and we’re looking at a bunch of different options there. We’ll probably have some announcements to make later on about that. We’re definitely thinking about it and working on it, but we don’t have anything to share just yet, unfortunately, about that piece of the story. Outside of the error messages and the code editing, there are a bunch of value-add services that we could, potentially, build into the Bids Helper story around being able to more easily share scripts and share frameworks. We’re also thinking about those. And we will also have some announcements around those in the future, too.

Mark

These next few questions come from Samuel. How did the idea for Biml and the foundation for Varigence and Mist and everything else get formed?

Scott

The original idea actually came when I was working at Microsoft. I was on the Developer Tools team there working on Visual Studio. Almost by accident, I fell into what became a data warehousing project. So, I was an application developer essentially working on developer tools who became an accidental DBA in a very real way. And one of the things I noticed, with that particular blend of experience, is that most of what we have learned about doing application and web development really well over the past several decades didn’t find its way into data development. And I think there are a variety of very good reasons, historical reasons, why that happened. But I thought there might be some benefit to trying to re-imagine those things that we have learned doing application and web development in terms of data development and see if something interesting fell out of it. So, to make a long story short, essentially, what we did was keep in mind there are a lot of parallels between what you can do in web development and the types of problems you try to solve in data development. What if you could have an HTML-like language that would describe your Business Intelligence or data warehouse solution? And then, once you’ve got that, take an ASP.NET type approach in putting code nuggets in to automate it. With that as a foundation, almost all of the things that you normally like to do in application or web development just light up and start working. Source control becomes valuable again. Builds become very, very powerful, and continuous integration becomes something that’s very useful. Being able to do automation and patterns-based development and best practices that are enforced for your team; all of these things just start lighting up, almost for free, once you move to that human-readable, writable, declarative HTML-like language with code nuggets interspersed. So that was the original insight. If we had this, we could go ahead and start turning out all these interesting things and start doing data development more efficiently and in a more maintainable way. Of course, it was a long journey to get to the place where we actually implemented all that stuff, which is where we are now. But that was the original insight that took place while I was actually building out developer tools, but for Application and Web development.
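[The HTML analogy is fairly literal. Flat Biml, with no code nuggets at all, is just a declarative XML description of your assets that the Biml engine compiles into real SSIS/SSAS objects. A trivial hand-written sketch, with a made-up connection string and log table, might look like this:]

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <!-- Placeholder connection; substitute a real server and database -->
    <OleDbConnection Name="Staging" ConnectionString="Provider=SQLNCLI11;Data Source=.;Initial Catalog=StagingDb;Integrated Security=SSPI;" />
  </Connections>
  <Packages>
    <Package Name="HelloBiml" ConstraintMode="Linear">
      <Tasks>
        <!-- A single Execute SQL Task that logs the start of a run -->
        <ExecuteSQL Name="SQL Log Run Start" ConnectionName="Staging">
          <DirectInput>INSERT INTO dbo.RunLog (StartedAt) VALUES (GETDATE());</DirectInput>
        </ExecuteSQL>
      </Tasks>
    </Package>
  </Packages>
</Biml>

[The ASP.NET-style part is the <# ... #> code nuggets shown in the earlier sketch: C# or VB that runs at compile time and emits more of this XML, which is what makes the whole solution source-controllable and buildable like any other code.]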

Mark

Given that history, and how you got started, what are your short, medium and long-term goals for Varigence? Where do you see this going?

Scott

Short term goals are all about, you could say, finishing the engine. As I mentioned a little bit ago, we’re adding in support for SSAS Tabular in the next version, 4.0. So, with that addition, and some of the enhancements we’re making, we’ve essentially got full coverage in features for Relational, Integration Services, and Analysis Services, including Power Pivot. And that’s going to be a great story. Now we’ve got the basis for building out any solution on top of those technologies. So, the medium-term goal is going to be a combination of two things. One is that we may start biting off additional pieces of the stack and not limiting ourselves to just Relational, Integration Services, and Analysis Services (Power Pivot, too). There are some really interesting things happening elsewhere in the Microsoft stack when you look at things like DQS and MDS and some of the Power BI stuff that’s happening. And we still hear a lot of requests for Reporting Services. Those are all things we’re looking at potentially building in the medium term, expanding the engine out into entirely new areas. In the medium and also long term, we’re looking at leveraging the engine in different ways as well. So, once you’ve got that core engine, people start asking the next set of questions like “How can you make it easier for me to manage my metadata?” This is one we’re already working on for 4.0. “How can you make it easier for me to take the solution I’ve built and package it for a hosted solution offering that my professional services company can offer to its clients? How can you make it so it’s easier to put this in the Cloud?” So, there are all these additional ways of repackaging this engine and providing additional services around it which enable entirely new scenarios. In the medium to long term, that’s where we’re going to have a lot of focus: enhancing all of the services around the engine instead of the engine-level focus that we’ve largely had thus far.

Mark

So, with that vision in mind, not just for Varigence, but for that “better way,” what would you say my job, as a BI Developer, would look like in five years?

Scott

I think that’s a very interesting question, and I would have to stop and ask what you mean by BI Developer in terms of a day-to-day job. Because, I think one of the issues that we’ve all faced in this industry is that, for a lot of reasons, some of them cultural, some of them historical, some of them because of the tools that we work with, we’ve all been forced to wear multiple hats. So, in any given day, I might do Data Analyst work. I might do Data Architecture. I might do BI Developer work, BI Architect work… I kind of have to jump back and forth. Even if I have an Architect at my company who’s providing me with patterns that I should use, generally speaking, they’re giving me some sort of template file that I then have to go and understand and customize for my particular solution. What we think is going to happen, and one of the things we are really trying to enable, is to allow those roles to separate. So that if I’m an Architect, I can provide a pattern, not just in a template, but in a completely reusable code file that the tools can then apply to anything. And as a BI Developer, I might focus on implementing business logic or implementing the bespoke, custom parts, the complexity that you can’t automate away. And then let that automatable complexity get handled by the tools that the Architect has driven. I don’t have enough confidence in how things progress after that to say what your day looks like. But I think one of the key things that you can expect to happen, especially if we’re successful in some of the approaches we’re taking, is to have your job be more focused so that you’re not having to wear all of these multiple hats at the same time. If we have our druthers, what we’d really like to do is also make your job a little bit more fun. Part of what we’ve heard from BI Developers in the past is that they never got into doing Business Intelligence because they thought it was fun to drag and drop onto a design surface and implement logging rules for particular regulatory compliance in a particular industry. They got into it because they love data and they love insight and they love being the first person in the world to know something interesting or important about their job or about their world. So, what starts happening is that once we start doing this as our day job, we end up spending more of our time doing the plumbing and doing this sort of “not fun” work and less of our time on the insight generation. And hopefully, through technologies like what we’re building and what we’re seeing happening elsewhere in the industry, that’ll start to shift to where I can have that role separation and focus on the parts of the job that I actually do love. And at the same time, I can have fun doing that again because, in the job that I’m doing, I don’t have to spend the time on the drudgery. More specialization and more fun is hopefully what is in the future for BI Devs.

Mark

So, here’s another question from Samuel. With so much emphasis on Self Service, including self service ETL with tools like Power Query that make it easier to move data around, what do you see happening to traditional ETL and SSIS? What do you see evolving from that enablement of end users while, at the same time, retaining that Enterprise Class complexity where it is needed?

Scott

That’s one of the challenging things with the message around Self Service BI. I’ve certainly seen people polarize into different camps, where some folks will say that Self Service BI is the future and traditional ETL and IT-developed solutions are going to be a thing of the past; no one’s going to use them anymore. And, of course, I’ve seen the polar opposite, where some people are saying that Self Service BI is a flash in the pan; you can’t solve any real problems with it; there’s no real organizational benefit to having it and it causes more problems than it’s worth. I’ve definitely heard people saying both of those things. I think that the challenge in all of that, or the issue with all of that, is that people are trying to apply those technologies or those solutions to problems that they’re really not well suited to solve. Self Service BI has an excellent and important role in the organization for being able to empower individual decision makers to get their questions answered at the speed of the Business rather than at the speed of the technology. Oftentimes, there’s a mismatch there. But at the same time, your Self Service BI tools are never going to work well if your data is not already in decent shape. If you’re plugging bad data into Self Service BI, you’re going to get bad insights out. And if you’re taking data that is not well formatted or well aligned to the types of questions you’re trying to ask, you’re not going to be able to get your questions answered unless you transform that data in a way that actually aligns it with those answers. And any tool that can do that needs to be complex enough to handle the complexity of those transformations. So, if I have a Self Service BI tool that is intended to solve ANY ETL problem, it’s very rapidly going to become as complex as Integration Services, because you need that level of complexity in order to solve those data integration problems. There’s no getting around it: you can’t wave a magic wand and make complexity go away. And that’s a good thing. If you could do that, we’d all be out of a job. So, what I think the future looks like is that you’re going to have a very strong presence in Self Service. Self Service is going to be part of that last mile story for Data. But that’s going to make the person whose day job it is to make that Enterprise Class data warehouse or Enterprise Class Tabular model even more important, because their work is going to be much more heavily leveraged. Instead of just leveraging that work through canned reports, we’re also going to be leveraging that work through all of these additional Self Service models. That, I think, is great for both sides of the fence. But you have to pick and choose your solution for the right business problem and not say, “I’ve got a solution, so, I am going to go solve every problem with it.”

Mark

Here’s another question from Catherine. How has the Effektor/Mist integration impacted Biml? And please talk a bit about what that integration is.

Scott

Absolutely. We’ve got a great partner that’s based in Copenhagen, Denmark, though they have offices all throughout the Nordic countries. In addition to being a great consulting and professional services partner, they also have a product called Effektor which enables nontechnical users, through configuration, to take a well prepared data warehouse or data mart and build out a whole bunch of additional features on top of it, including cubes and workflows. There’s a bunch of stuff that you can do with it. It’s consistent with the philosophy of Biml; i.e., it’s better, where possible, to have configuration instead of having to code everything up from scratch. And one of the things that they noticed as they were building out more and more functionality is that it made sense for them, instead of building out an engine that could do all of the Integration Services and Analysis Services code generation, to just use the Biml engine for that instead. So, they could focus on the parts that were important to the business and say here’s the type of metadata that we need, here’s the type of configuration UI that we need, and here’s how we’re going to translate that into our patterns and practices. Then they can plug all that into the Biml engine and let us do all the code generation bits. So, we announced recently that they are actually going to be integrating the Biml engine for their code generation into Effektor, and they’re also going to be providing Mist as an option for some of the custom logic generation that supplements what’s available inside the Effektor product. In terms of new directions for Mist and Biml, I think the interesting thing about that question is that the way we have architected Mist and Biml does not require that we change our direction in order for new and interesting things to happen. You don’t have to rely on US to do things for you. So, the fact that Effektor is now enabling these new scenarios means that the Mist and Biml ecosystem is going to progress in this new direction without our having to make any product changes. So, I think there are definitely going to be things that we do to make the development of Effektor easier and let them focus even more on their areas of expertise and less on some of the code generation, which is our area of expertise. But really, I think the more interesting thing is that now that the Mist and Biml story is enhanced by this additional Effektor functionality, that’s a new direction in and of itself. And that’s the thing that is the most exciting. We love the Effektor relationship, and we love the relationships with our other partners as well, because they start using the product in ways that we would never have the resources to enable ourselves if we were doing all development independently. And in a lot of cases, they do things that we never really anticipated or thought of. And that’s something that’s really rewarding, I think, as a tools developer, because it tells you that you got it right. If somebody is successful in using your tool in a way that you didn’t intend up front, that you didn’t plan for, that means you built a really robust and really useful tool. I’m definitely of the mind that nobody can solve all of the problems. But somebody can provide building blocks. And your building blocks are only as good as the problems that get solved with them. So, if we’re solving all these interesting problems, it means we built good building blocks, which is rewarding.

Mark

Here is another question from Catherine. Are there any plans down the road for Biml books?

Scott

Absolutely. We have a couple of books actually under development right now. We have a stable of authors that we have recruited. Actually, it wasn’t much of a recruiting effort. We were originally thinking about doing just one book. And we went and approached a list of authors because we wanted it to be sort of a community collaborative sort of thing where we had a recognizable author writing each chapter. And we had a list of authors that we wanted to approach, and we made the list a lot longer than we thought it needed to be because we thought maybe half or two thirds of them would say No. The funny part is that when we went and approached them, I think every single one said Yes. So, we had more authors than we were originally intending. The solution there was to go ahead and do two books. We’re still in the very early stages. It’s going to take a while; books take a long time to get out the door. We are going to have one book whose working title is “Biml: The Definitive Reference,” which will be an end to end resource to learn everything you need to know to be effective with Biml. It will have reference material and conceptual descriptions as well. And then the other one is a Biml cookbook. So the chapters will be devoted to specific problems you need to solve, then options for different patterns that you could use. For example, we might have a chapter on Unit Testing with all sorts of different recipes that you could use to do Unit Testing very effectively with Biml. Or another chapter on different patterns like doing Type I and Type II slowly changing dimensions, etc., in Biml and various different options for doing that. So that, I think, is going to be great. It’s going to cover both ends of it. For the people that just like to sit down and become experts on stuff, we’re going to have an option there. And for people that prefer to wait until they are confronted with a problem and then go read specifically about that, we’re going to have an option there, too. Unfortunately, we’re not announcing a timeframe on that yet because we don’t want to get it wrong and we’re still a little too early to know exactly what the ship date is going to be. But, it’s definitely something that is an active area of work for us.

Mark

Catherine helped with this question, as well. So, right now, there are members of the community just stepping up to do presentations. And sometimes those presentations happen in places that might not be feasible for you guys to get to, because of cost or whatever else it may be, and there’s a limited number of people you have here at Varigence. Are there any plans to have kind of a Train the Trainer or some sort of certification program to say, “This person is a certified Biml Trainer,” and give them access to stuff? I’m thinking of the Microsoft Certified Trainer program. And obviously that’s a very robust thing. But are there plans for something like that?

Scott

Yeah. Absolutely. And in fact, we’re past the planning stage on that. We actually have something in Production right now on that. But, first, just a note about the Community talks. A lot of folks aren’t aware of this, and hopefully we’re going to start doing a better job of publicizing these talks that are happening worldwide, but if you look at just the past six months, January 1 through June 30, worldwide, we have had 84 talks across 15 countries with 22 distinct speakers. And that’s incredibly rewarding, and I just want to give a huge Thank You to the Community. That’s something where there is absolutely no way we could get that amount of reach on our own. And these are people who don’t work for Varigence; they do it because they love the technology and they love telling the story to others. So, I think we’ve already got something great there. Of course, your one-hour Community event isn’t a replacement for training where you’re actually able to go on site and say, “Here’s your full training program,” where, at the end of it, you’ve got the expectation that the Team’s just going to hit the ground running and start working directly. So, for that, we work through partners. We do have a training program we can offer directly, but we prefer to work through partners as often as possible, because we don’t think there is any way, going back to some previous conversations, we can be as effective training somebody in, say, the Healthcare vertical as a professional services company that spends all their time in Healthcare. We would rather say, “Let’s get you set up on Biml” instead of saying “Let’s get our trainers able to talk Healthcare.” So, we already have a program; we have a Consulting Partnership program. If there are consulting companies out there that are interested in being able to work directly with Biml, and actually train with it as well, get in contact with us. We have a Train the Trainer program and a whole bunch of materials that are out there. So, essentially, we have pre-canned offerings that you can start with and then tweak. We have hour-long, two-hour, half-day, full-day, three-day, and full-week sessions. So, we’ve got all of those pre-canned, and of course you can tweak them and you can supplement. They’re module based, so you can add in your own modules. You can tweak them to use your own branding. You can tweak them to change some of the messaging if there are particular aspects that you know would be more or less interesting to your particular client or your particular industry vertical. So, we’ve already got all that set up, and if anybody’s interested in engaging on that, please don’t hesitate to contact us.

Wrapping Up

That takes care of the interview with Scott. I have to thank Scott very much for his time for this and the time he spent showing me Biml and Mist. And I need to thank Catherine and Samuel for their help in coming up with appropriate questions for this interview. Overall, I was immensely impressed with the amazing work that Varigence, with Scott’s leadership and vision, has done with Biml and Mist. I am now planning how I can roll time for learning Biml into my schedule around everything else going on. I am sure I can justify the ROI.

I hope this helps shed some light on Biml and where it is headed.

PASS Summit Interview With Kamal Hathi

For the third, and final, installment in my PASS Summit Interview series, I present my interview with Kamal Hathi, Director of Program Management for Business Intelligence Tools at Microsoft. Kamal is the one who is ultimately responsible for the direction of Microsoft BI.

As with my other interviews, the byproducts of casual conversation have been edited out for better flow in writing.

Transcript

Mark V:

Things are changing very rapidly in the BI space. There are so many tools coming out and Microsoft is delivering so many awesome new features. As a BI Developer, what does my job look like 5 years down the road?

Kamal:

That’s a great question. I think we’ve been on a journey where BI was something central that somebody built and everybody sort of just clicked and used, or maybe had a report. Now, we’re coming to a point where a lot of it is empowered by the business. The business guy can go off and do lots of things. But there are two parts that are interesting that a Developer, a Professional, if you will, needs to be in the middle of. One, I think, is the data. Where does the data come from? The first requirement is having the right data. And call it what you want: Data Stewardship, Data Sanctioning. And it’s not just Data meaning rows and columns or key-value pair kinds of things. I think there’s Data, as in the Data Model; building a Data Model. And a Data Model, sometimes, is not different than what people do building Cubes. They have Hierarchies, and they have Calculations, and they have interesting drill paths. All kinds of things. So, someone has to go do that work, many times. Even though, I think, end users can go and mash up data themselves, there are times when you need a more supervisory nature of work; or a more complicated nature, if it turns out that Calculations are difficult, or whatever. Someone’s going to always do THAT work. And that’s interesting. The second piece that is interesting is that I think there’s going to be a new workflow. We’re already seeing this pattern in many places. The workflow is going to be like this. An End User decides they want to build a solution. And they go off and, essentially, put something together that’s very close to what they want. But it’s not really high performance; it’s not perfect. Maybe it’s got some missing things. And they use it. And their peers use it. It gets popular. And then some Developer comes in and says, “Let me take that over.” And then they improve it, make it better, polish it up. Maybe completely re-write it. But they have the right requirements built in right there. A third piece you will start seeing is going to be the Cloud services based world. That is taking bits and pieces of finished Services and composing them together to build solutions. And that’s something we don’t see much today. But I can imagine someone saying, “Hey, I’ve got this Power BI piece which gives me my visualization, or some way to interact. I can take some part of it and plug it in to another solution.” They can provide a vertical solution, provide a customized thing for whatever audience it is. And be able to do so. I think those kinds of things will be much more likely than doing the whole thing.

Mark V:

So, instead of the BI Developer necessarily working with the users to help map out requirements, with these End User tools, the users can build something that embodies their requirements.

Kamal:

More or less. And then the Developer can jump in and make it better. Maybe re-write better expressions and queries and make it faster; all kinds of interesting things. It just adds more value to their job. Instead of sitting there talking to people and getting “Oh, this is wrong. Re-do it.”

Mark V:

When Tabular came out, there was some uproar. When a new feature comes out, there are always “the sky is falling” people that say that something else must be going away. It happened when Power View came out. People said, “Report Builder is going away.” Then, when Tabular came out, “Oh! Multidimensional is going away.” I still hear it out there sometimes that Multidimensional is dead or dying; MDX is dead or dying. What message do you have for those people?

Kamal:

Two things here. MDX and Multidimensional are not the same thing. We should be very careful. Multidimensional is very important. Lots of customers use it. And we’re actually making improvements to it. This latest thing, DAX over MD, which allowed Power View to work over Multidimensional, is a great example. We know this is important. There are many customers who have very large scale systems in Multidimensional. And it’s important to us. We’ve just come to a point with Multidimensional where large jumps in functionality are just harder to do. You can’t really go in and say we can 10X anything. And so the In-memory stuff, the Tabular, has been the approach we’ve taken to give more kinds of scenarios; the more malleable, flexible stories, the “no schema design up front” kind of approach. But Multidimensional is super important. It isn’t going anywhere. So, when someone asks, “What should I use?” we say, “You pick.” Our aim is that our tools should work on top of either. We would like Tabular to be at parity with Multidimensional in terms of capabilities. And we’re making progress. We’re getting there. We haven’t quite gotten there. But users shouldn’t have to worry, in our opinion, about Multidimensional or Tabular. These should be things that you worry about as a tuning parameter, but you shouldn’t have to worry about them as a big choice.

Mark V:

So, the Business shouldn’t be concerned about it.

Kamal:

Right. It’s a store. It’s a model. It’s a calculation engine. And you might want to switch one to the other. And in the future, that could be a possibility. There are lots of limitations that might not make that practically possible, but notionally speaking. I’m not saying you could just pull a switch. But you should be able to have similar capabilities and then decide which one to use.

Mark V:

Or possibly some kind of migration tool?

Kamal:

Yeah, but those kinds of things are harder sometimes. They’re not so easy to do because… who knows what’s involved? What kind of calculations did you write? Etc. Those are much harder to do. Migration is always hard. But comparable capabilities make a lot more sense. So, I can take this guy, and build the same thing, and not have to worry about a dead end.

Mark V:

When I was at TechEd, Kay Unkroth did a great Managed BI session. And he started with a demo of taking in data from the Internet and combining it with some business logic. It was tracking the purchasing habits of Pink Panthers buying cell phones. And in his scenario, they went through a big investment at this Company only to find out that Pink Panthers don’t exist. So, in the realm we have, with data becoming more self-serve, with Power Query, etc., being able to reach out to more and more places, what is the thinking [from Microsoft] on the ways we can continue to have some governance over the realm that users have?

Kamal:

Fantastic. This question goes back to the first question on what Developers do. We talked about Data Stewards and Sanctioned Data and all that. And even with Power Query, if you work with that, the catalog you find things from isn’t just the Internet. The Internet is one catalog. We are also enabling a Corporate, internal catalog, which we showed in the keynote today. And you saw what we can do. It’s in Power BI, and you can actually try it out. And the goal there is to find a way for someone, it could be a business user, maybe a Developer, the IT professional, to go off and add to a catalog that is internal to the company. They can add something they think is trustworthy and worth sharing. And then someone else can come in. And if they want the shopping habits of certain constituencies or a certain segment, they can find it. As opposed to “I heard on YouTube that 20-inch phones are now hot.” Who knows, right? Just because it’s on the Internet doesn’t mean it’s true. That’s the idea of having this info catalog, essentially. It knows how to provide people the capability of publishing. And that can be a mechanism for whomever one deems fit in the process to provide that sanctioned publishing. And maybe have a data curator look at things and make sure users have a place they can go for trusted sources as opposed to “wide open.” And we actually enable that. People can do that.

Mark V:

Could you then disable the Internet piece of that catalog in certain organizations?

Kamal:

Potentially. The question is Why? And, again, it’s a process thing. You ask people Why and what they want to do. And that’s the danger of governance. The minute you make something such that it becomes that Forbidden Fruit, someone will find a way of doing it. They’ll go out and Cut and Paste. They’ll do something. It’s almost impossible to really lock these things down. What you do want is for people to be aware that there’s an alternate, sanctioned place where they can go compare and they can understand. And that’s what they should be doing. I think it’s much harder to lock it down and say you can only get data from inside. But that’s a process view; an organization will decide how they want to do it.

Mark V:

Early in my career, I built several report models for clients. Obviously, you know, they’re going away. They’re not being invested in. I recently wrote a Technical Article about alternatives to Report Models in SQL 2012. And I laid out the different options that you have, from Power Pivot for Excel, Power Pivot for SharePoint, Tabular, or full on Multidimensional, etc. One thing that was pointed out to me afterward is that one of the things that goes away with report models is the ability to make that regular detail-style table report in Reporting Services that can go against a drag and drop model that’s not a Pivot table. Is there something coming down the road that could alleviate that?

Kamal:

There are ranges of this, right? You can do a drag and drop tablix, even, in Power View. So, it’s not like it’s completely gone.

Mark V:

Right. It’s not completely eliminated. But it’s not the flexibility of a Report that you can do subscriptions with, etc.

Kamal:

I think in terms of those kinds of things, like Subscriptions, mailings… Two things. One: Reporting Services is still there. It’s not like it’s gone away. And number two: we obviously think it’s important to have the same capabilities in other places, and so we’re looking into making that possible. A lot of people ask for it. And it seems like a reasonable thing to ask for. Why would I not want to be able to schedule even my Excel sheets? A report’s a report. For some people a report is a Reporting Services report. For some people a report’s an Excel report. It’s an important point, and certainly we’re looking into that as something we would do. But I don’t know When, Why, How, Where. As with all things Microsoft, it’s open to interpretation by whichever tea leaves you read.

Mark V:

From the visibility you have within Microsoft to see what’s going on, could you describe, in a general sense, how a feature for, say, Analysis Services, goes from ideation, with someone saying, “Hey, wouldn’t this be cool?” to making it into the product?

Kamal:

Absolutely. There are multiple ways this happens. One is when we had an idea before, or a customer had asked for it, and we decided to ship it and couldn’t. And then it becomes a “backlog” feature. So, next time, when the next version comes, this one is left over and we say, “Hey, come join the train.” We just pull them back on. That’s easy. Done. Everyone knew we wanted it, or tons of people had asked for it, and we just couldn’t finish it, so we just carried it over. The second, which is much more likely, is customers have been asking for something. For example, we heard the feedback loud and clear, “We need support for Power View over Multidimensional models.” That’s a pretty clear ask. The work is not easy, but it’s a pretty clear ask. So then you go and say, “What does it take?” We figure out the details, the design, the technical part of it. And then you figure out the release timeframe, the testing. All that stuff. And you do it. The third one is when somebody says, “I have an idea.” For example, “I want to do something. I have an idea for an in-memory engine and it could be like a Power Pivot.” And that’s just like, “Wow! Where’d that come from?” And then you say, “Will it work?” And nobody knows, right? So then we start going in and asking customers what they think. Or, more likely, that idea had come from talking to somebody. And this Power Pivot case actually came from talking to customers that said, “Hey, we want end users to be more empowered to do analysis themselves. What can you do?” That was essentially the germination of that idea. When that happens, there’s usually some customer input, some personal input. And then they start to come closer to fleshing it out and ask who’s going to use it. So we ask customers, get some replies. Sometimes we just do a design on paper and say, “Would you use this?” And then people try it out, give it some feedback, and say, “Yeah. That’s good.” So, we go down this path getting more and more input. And then we come up with a real product around it. Sometimes, we have an idea that we think is great and we float it around and we hear vocal feedback saying that’s a bad idea. And we go back to the drawing board and re-do it. And this happens all the time. Many of these features show up, and people don’t realize that we went back and re-did it because the community told us. And many people ask if we use focus groups here [at PASS Summit]. There’s one tomorrow, actually, to go sit down and say, “What do you guys think about XYZ?” There’s feedback; and we listen. So, there’s an idea, there’s feedback, there’s evolution. And we iterate over it all the time until we come to a solution on that. Very rarely do we just build something where we didn’t ask anyone and we just did it. We come up with a brainstorm and flesh it out, but we always get feedback.

Mark V:

So, Matt Masson has discussed the debate within the SSIS team regarding the Single Package deployment model versus the new Project Deployment model. I would assume that happens throughout all the products. Maybe there’s somebody championing this feature and people asking “What are we going to do?”

Kamal:

Yeah. My first memory of working on this team is people shouting in hallways… in a friendly way. People shouting, “Hey! What about THIS?” or “What about THAT?” And it turns out to be a very vocal and a very passionate environment. People don’t just work because they walk in in the morning and write code. They work on this because they are just in love with this product. They are just deeply, deeply involved and committed to what they’re developing. And these people just have opinions. And I don’t mean opinions like “Yeah, I want to have a hamburger.” They have an OPINION. And there are very passionate discussions that go back and forth and back and forth sometimes. Typically what happens is a Senior Architect or someone who has some tie-breaking capability can come in and say, “Look, great ideas everybody, but we’re going to do THIS.” And then people listen, there’s some more argument, and after that, it’s done. And we go and do it. And Power Pivot is a great example of that. People were like, “Are you crazy? What are you doing??” And it was like, “No. We’re going to do it.” And that was it. And the team just rallied behind it and built a great product, and off we go. But, the good part about that story is that, because people have such vocal opinions, they rarely remain silent about these things. We don’t lose anything. And so we have an answer that we can listen to. And we can decide from multiple options as opposed to just going down one path. And then we end up with a decent, rigorously debated solution.

Mark V:

So, there’s a lot going on right now with Cloud and Hybrid solutions; there’s Chuck Heinzelman’s great white paper on building BI solutions in the Cloud in Azure virtual machines. When I talk about that with clients or with colleagues, there’s still a lot of trepidation and people say, “For some industries, that will never ever happen.” What kind of message would you have for people that just don’t see how it’s feasible?

Kamal:

Cloud has many aspects to it. There is the “Everything’s in the Cloud. I want to put my financial, regulatory data in the cloud.” Which is not going to happen for some people. Then there is the “I can get value from the Cloud on a temporary or on-demand basis. I want to go burst capacity. I want to go off and do a backup.” Whatever it is. There’s that. Then there is the “I’m going to do some parts of my workload in the cloud and the rest will remain on premise.” And for all these options, whether it’s completely in, hybrid, or temporary bursting, the value provided is typically what customers decide makes sense for them. If it’s the value of all the infrastructure just being taken care of so I can do more things to add value to a solution, so be it. If it happens to be that I can get extra capacity when I need it, great. If it happens that I can keep my mission-critical or compliance-related data on premise and lock it up, and work hybridly with the cloud, that also works. And for most customers, most users, there is value in many of these scenarios. And there isn’t any one answer. You can’t just say that “The Cloud” means that you do X. It just means that you have many options. And interestingly, Microsoft provides many options. We have Platform as a Service [PaaS], fully managed platforms where you pretty much just deploy your database and have to do nothing. Azure SQL Database is a good example. All you do is worry about building your database, setting up your tables, and you’re done. We take care of all kinds of things in the background. Don’t like that? Go to a VM. And you can do all kinds of fancy things. Chuck’s paper is a great example. We have solutions. Office 365 gives you end-to-end; from data to interfaces to management, all in one. Each of these things has hybrid solutions. You can have data on premise. As we saw today in the keynote, you can back up to Azure. With 365 and Power BI, you can actually do hybrid data connectivity. So, all of these things work in different ways. I think, for every customer out there, it typically is just a trial solution, they want to show something, or they want to have part of their solution that works. Either way, it adds value. Typically, what I have found as I talk to customers, is that many of them come from the “I would never do that” to “Oh. I might try that.” Because they have come to the realization that it’s not an all-or-nothing proposition. And you can do parts, try it out, and it works.
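
The backup scenario Kamal mentions is the easiest one to picture in code. Here is a minimal sketch of the backup-to-URL feature shown off around SQL Server 2014, assuming an existing Azure storage account; the credential, account, and database names are mine for illustration, not anything from the interview:

    -- Store the Azure storage account name and access key as a server credential.
    -- (Illustrative names; substitute your own storage account and key.)
    CREATE CREDENTIAL AzureBackupCredential
        WITH IDENTITY = 'mystorageaccount',      -- storage account name
        SECRET = '<storage-account-access-key>';

    -- Back up straight to a blob over HTTPS; no local disk or VM involved.
    BACKUP DATABASE AdventureWorks2012
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         COMPRESSION, STATS = 10;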

Mark V:

Over the Summer, I did a proof of concept for a client using Tabular. Just because, with the size of it, and looking at its characteristics, I said, “This would be a great Tabular solution. Let me demo it for you.” I have talked to several people in the industry. And the process of developing Tabular in SQL Server Data Tools can be… less than awesome. I had some bumps in the road. There was a great add-in I got from CodePlex [DAX Editor] that helped me deal with DAX in more of a Visual Studio environment. It didn’t work well with Service Pack 1 [of SSAS 2012 Tabular], and that kind of stuff. There was something that Marco Russo had put forth on Connect that suggested more or less a DDL language for Tabular models. Is something like that feasible?

Kamal:

I don’t know. I don’t have a good answer for that. The reason for that is we’re looking at many options. What would move the ball forward in that direction for a design environment for Tabular models? And there are many options. Some would say let’s do it in Excel; take Power Pivot and put it on steroids. It’s a possibility. Or a DDL language. Go off and take the things you had in MOLAP and apply them here, maybe. Maybe something brand new. I don’t know. We’re trying to figure out what that is. I do know that we do want to take care of people who do large-scale models, complex models, in this environment. I just don’t know how and where and when. But it’s an important constituency, an important set of customers. And we’ll figure out how to do it.

Mark V:

As a BI developer, it’s important for me to know that the discussion is being had.

Kamal:

All the time. This happens in a lot of hallway discussions. This is one of those.

Wrapping Up

There’s a little bit of a story to this one. When I decided I wanted to do a blog series composed of interviews conducted with Microsoft folks at the Summit, I wanted to get different perspectives. With Matt Masson (Blog|Twitter) and Kasper de Jonge (Blog|Twitter), I already had members of teams in the trenches of development of the tools. I had then reached out to the awesome Cindy Gross (Blog|Twitter) to get the perspective of someone on the CAT (Customer Advisory Team). Cindy got back to me with a contact for Microsoft PR, Erin Olson, saying that she was told to send me there. When I contacted Erin, she offered to have me sit down with Kamal Hathi, who would already be on site that day. That was an offer I couldn’t refuse. In hindsight, I wish I had asked about sitting down with Cindy as well, but I had already decided that my first series of this sort would be capped at three, since I had never attempted anything like this before and didn’t know what to expect. If this series proves to be popular and of value to the Community, then I will certainly consider doing it again and asking Cindy to participate.

You will notice some overlap in the questions posed to my fantastic interviewees, particularly between Kasper and Kamal. I wanted to get different perspectives from within Microsoft on some similar topics. I also made sure to branch out in each interview and ask some questions targeted to a particular person.

In response to my “5 years down the road” question, Kamal echoed the importance of Data Stewardship. It is clear that this is an area that Microsoft is taking very seriously. Having done a lot of reporting in my career, my motto has always been, “It HAS to be right.” Clients have appreciated that. As we open up more and more avenues for users to get data, we must keep in mind that the data needs to be trustworthy.

I really want to highlight the ways in which Kamal described how a feature makes it into the product. Make special note of the fact that customer feedback is vitally important to Microsoft. Sometimes, the idea itself comes from customers. I think Microsoft often gets a bad rap as some kind of bully merely because it is big. It is certainly not perfect; no company is. But I think it is really important to note how Microsoft DOES listen to customer feedback when it comes to the products it provides.

Kamal’s description of the internal debates that occur within Microsoft over features is important. It also echoes what we heard from Matt and Kasper. The people working on these products for us care VERY deeply about what they are doing. The work and passion that go into creating these tools we use every day are staggering to me. While I have never been a “fan boy” of any company, I have chosen the SQL Server related technologies upon which to base my career. And I have no regrets. This is a hugely exciting time to be a BI professional. The investments that Microsoft has been making in this space over the past several years make it even better.

This concludes my PASS Summit Interview series. Thanks so much to Matt Masson, Kasper de Jonge, and Kamal Hathi for taking time out of their very busy schedules to sit down with me and answer my questions. Thanks also to Cindy Gross and Erin Olson for their assistance in connecting me with Kamal. This series turned out even better than I had ever expected thanks to the generosity of those involved.

PASS Summit Interview With Kasper de Jonge

I continue my Interview series with Analysis Services Program Manager Kasper de Jonge (Blog|Twitter). As before, some edits were made, with Kasper’s permission, to eliminate byproducts of casual conversation and make things flow better in writing.

Transcript

Mark V:

How would you say my job as an SSAS developer would be different in five years?

Kasper:

Before I joined Microsoft, I was a developer myself. I developed Analysis Services Cubes and SSRS reports on top of them. And they never seemed to work very well together. One of the things I have seen over the years, since I joined Microsoft, is the teams started working together better, much better. So, teams like Power View and Analysis Services are coming together in releases, and now Power Query and the Data Steward experience join the mix. But I think that is one of the key aspects going forward.

I had been trying to sell MS BI before joining Microsoft, and it was hard. What do you need if you want to buy MS BI? You need Excel, so you need an Office license key, you need SharePoint, you need Analysis Services, you need Enterprise Edition (or BI Edition now; luckily we have that). So, you needed to sell four different products. Now you can just say, we have one product: Power BI.

It’s happening gradually. Power Query is still a little bit separate. The M language is there, then there’s the DAX language, and what do you do where? But at least we’re landing. The first thing we said two years ago was that there’s only going to be one model. And that’s the Analysis Services model. In the past, Reporting Services had their own model, right? The SMDLs [Semantic Model Definition Language]. PerformancePoint had their own models. They all had their own stuff. So we said, “No more. There’s only going to be one model, and that’s going to be Analysis Services.” That’s already a big step. You see teams like Power Map come into the picture. The initial versions that were not public were not really connected to our stuff. We sat down together and said, “Let’s be sure we all do the same thing.” So, if you go into Power Pivot and you say this column is a country, tag it as a country, not only can Power View use it, but Power Map will now use it as well. I think that’s one of the biggest benefits and it was really needed: to make one product, and make them work much better together.

Mark V:

Do you see big changes in the skills of people like myself, not an Information Worker, but someone who sets up the environments in which the Information Workers play?

Kasper:

I don’t really think so. I think the role is going to change a little. And that’s not necessarily to say that you’re going to have to do different things. But in recent years, as there’s less IT, more cutbacks in IT, you have to do more things in less time. So, enabling the Business User is becoming more and more important. And not just by giving them canned reports, but by giving them better models, which we already did with Multidimensional models for years. But make it even easier, and that means making good models in either Multidimensional or Tabular, and having a good analytical platform on top of that. So, that’s one kind of user, who only wants to do template reports or ad hoc visualization on top of models. That kind of stays the same, I think. I do hope that with Tabular models, it’s becoming easier to do shorter iterations, and we can grow the Tabular model over time and make it easier to use and make it easier to do larger things. For example, I have seen people that have six to seven hundred measures in their Tabular model. And that’s pretty hard to maintain. So, we need to come up with stuff to make that easier. I met someone yesterday that had 120 tables and five hundred measures. Well, right now, we don’t have a great experience for you to build and manage that. So we need to think about what that means. It’s more about how the tools change. I’m a PM [Program Manager] who works on the tools side of things. So, that is one aspect of the BI Pro as we know them today.

On the other side of things, with data movement, as Matt Masson was showing earlier today, you can expose data for your users to start using inside Power Query. And you can enable data stewards to start creating data. So, you, as IT, are not necessarily building it, but you are starting to enable people. And I remember, back in the day, when I was building cubes myself, I built an application in .Net that allowed business users to add data to the data warehouse. Master Data Services does it now pretty well. So, there are two types of Business Users. One is the user that just wants to do reporting and doesn’t want to do any modeling or calculations themselves. So, that’s one. The other is the actual Power Pivot/Power Query user, and we can help them get to the right data easily and make them confident that the data is right. And that’s an important avenue. And I think that’s also an important part of what BI pros have been doing for years. They can shift a little bit into that mindset, and enable that as well.

Mark V:

From a tools perspective, one of the questions I have is around enabling the end user to get more and more data, including data directly from the Internet. One of the things you talked about is the experience for the data steward with Master Data Services. Is there discussion around a solution that allows users to get data from the internet, but only so much? Kay Unkroth, at TechEd, did a great session around Managed BI. In that session, a fictitious company tracked the purchasing habits of Pink Panthers. And it wasn’t until a large investment had been made that someone realized, “Oh no. Pink Panthers aren’t real.” So, the experience of getting to more data. But how do we make sure it’s good?

Kasper:

There are definitely discussions about all of that. And you already see it a little bit in the portals. If you saw Matt Masson’s session today, you saw that you can track how many times different data has been used, and by whom. And we have that On-Prem today. And, in my mind, that is one of the most popular things: allowing you to understand what the data means. And I sincerely hope, and I am not sure if this is coming, but things like Data Lineage would make a lot of sense in here as well. I don’t know if you’re familiar with Prodiance? That’s something that the Excel team has. And it’s already released in Excel 2013. And it allows them to do, sort of, Excel spreadsheet lineage focused on the financial markets. I don’t know if you remember, this was a few years ago, when someone made an error in some calculation in an Excel spreadsheet and they lost a few billion dollars. So now all the banks, etc., are saying, “OK. We need to manage this.” So they [Excel Team] have a product they bought, I think two years ago, called Prodiance. And it’s now available inside Excel. They only discover Excel workbooks for now, and they don’t know anything about data models and everything that goes into that. So, it would be great if we could “hook that up,” for example. I’m not saying that we’re doing that. But that’s something that would make sense.

Mark V:

So, with the way that Office and Analysis Services are dovetailing more, like in Power BI, is there sometimes contention between the teams?

Kasper:

No. The Office team loves what we’re doing. We’re adding value to Office. We’re giving them all kinds of new features. And we’re innovating in the BI space. And they love that. They do give us some hints and tips on what they want to see, and we try to accommodate that. It’s more like working together. Our directors are working together, and they see what is needed and say, “How do we work together on doing this?” We are all working together in the Excel code base. And think about Power BI: it’s one completely shared code base. You have Office 365, SharePoint Online, all the infrastructure. It’s one big surface that lives and breathes together. So, it’s a lot of working together.

Mark V:

That has to be pretty exciting.

Kasper:

Yes. I mean, it’s a big company. The Office team has its own building. It’s a little bit different. Each team has its own rules and its own ways of working. Office has a longer planning period. We don’t have a long planning period. In the past, we also had different shipping vehicles. Now this is more streamlined.

Mark V:

So, with the evolution of Analysis Services to feature both the Multidimensional and now the Tabular model, I encounter people who say, or have heard others say, “Multidimensional is dying” and “Don’t bother learning MDX because it’s not going to matter anymore” and so on. What kind of message would you have for those people?

Kasper:

My next session, in an hour, is about all the investments that we made in Multidimensional that allow you to do Power View over Cubes. And that was no easy improvement. So, we now support DAX queries on top of Multidimensional Cubes. That is some major, major work that has happened. We’re saying, now you have all the good stuff with Power View. And whenever Power View does something going forward, you will get it. Automatically. So, it’s definitely not that. Having said that, it’s still a hard decision on when to go for what. Multidimensional is just a much more mature product. It’s been in the market for so long. People have worked with it for all these years. With Multidimensional, we’ve seen all these different usage types. We’ve seen the Yahoo cubes, the huge ones, the small ones, we’ve seen people do Writeback, and all those kinds of things. So, it’s been around the block. Tabular has not been around the block for long. It just started the journey. So, we’ll see where that ends up. I’ve heard some feedback from people here as well. They did Multidimensional cubes and they started Tabular and said, “Well, it’s just great because it makes it so easy and makes it so fast to build something.” But it doesn’t have certain features. That’s for sure. Calculated Members would make my life so much easier. I wouldn’t have to do 400 measures. If I have Calculated Members, I could just have a few Calculated Members, and I’m done. I don’t have to do YTD for this measure, and this measure, and this measure. And custom rollups: you can’t do them in Tabular. There are just some things in Tabular that you cannot do yet. For example, Hierarchies. Get me the Parent of something. In Multidimensional, it makes sense because you have Attribute Relationships and you have Hierarchy structures. In Tabular, we don’t. We just have Tables. We have Hierarchies there, but hierarchies are more an “ease of use” feature instead of a structural feature, like it is in Multidimensional. So, there are just a lot of things that haven’t made it. We don’t know if we want to bring that into Tabular. So, it’s not that, that’s for sure.
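
Kasper’s YTD example is worth making concrete. In Multidimensional, the classic pattern is a single calculated member on a utility “date calculations” hierarchy that applies to whatever measure is in context; in Tabular (as of SSAS 2012), each base measure needs its own time-intelligence measure. A rough sketch, using Adventure Works-style names purely for illustration (the utility hierarchy and measure names are my assumptions, not from the interview):

    -- MDX (Multidimensional): one calculated member covers YTD for EVERY measure.
    -- [Date].[Calculations] is an assumed utility hierarchy with default member [Current].
    CREATE MEMBER CURRENTCUBE.[Date].[Calculations].[Year to Date] AS
        AGGREGATE(
            YTD([Date].[Calendar].CurrentMember),
            [Date].[Calculations].[Current]
        );

    -- DAX (Tabular, SSAS 2012): one YTD measure per base measure.
    Sales YTD := TOTALYTD([Sales Amount], 'Date'[Date])
    Tax YTD := TOTALYTD([Tax Amount], 'Date'[Date])
    -- ...and so on, once for every measure that needs a YTD flavor.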

Mark V:

Multidimensional is not going away.

Kasper:

No. It’s certainly not going away.

Mark V:

So, with MDX being as complicated as it is, and even though it would take years to get really good at MDX, is it still a worthwhile path to go down, since there is still so much Multidimensional out there?

Kasper:

Yes.

Mark V:

And there are still so many use cases for Multidimensional, even with Tabular.

Kasper:

And Excel still talks MDX even to Tabular. There are so many tools out there that talk MDX. But, having said that, I’ve heard a lot of people here that said, “I’ve migrated a lot of Multidimensional Cubes to Tabular Cubes. It makes my life so much easier.” So, I’m not sure I can give an answer. But, I think you can get away with just learning the basics of MDX. Or learning the basics of both. Because, I think, that’s probably what you’re going to need. You probably think about, “What do I need to become an expert in?” I’m not sure what the answer is.
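
Kasper’s point that Excel still speaks MDX, even to Tabular, is easy to picture. A basic MDX SELECT, the kind an Excel PivotTable generates, runs unchanged against either engine. A small sketch; the cube, measure, and hierarchy names are Adventure Works-style placeholders of mine:

    -- Runs against a Multidimensional cube or a Tabular model alike;
    -- Tabular answers MDX queries even though its native language is DAX.
    SELECT
        [Measures].[Internet Sales Amount] ON COLUMNS,
        [Date].[Calendar Year].MEMBERS ON ROWS
    FROM [Adventure Works];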

Mark V:

It’s kind of tough. That’s the position I’m in, personally. I’ve done a little MDX. I have a blog series and stuff like that; it went really well. And I’m like, “Well, do I dive deeper into that? Do I do something similar for DAX?”

Kasper:

It kind of depends on the situation you’re in, I would think. If you have the opportunity to push Tabular, it fits much more into the Agile world. I mean, it’s so much easier to make some changes. But if your customers’ demands are not Agile, if they want to stick to the old-world methods, then Multidimensional is probably preferred, I would think.

Mark V:

So, having been on the [Analysis Services] team for a few years, are there features of Tabular, of Power Pivot, or anything that you championed and are really proud of? Anything where you’re like “Hey, I stood up for this, it’s in the product, and I’m really pumped?”

Kasper:

It’s so much of the little things. I involve myself, personally, with everything. Things like this particular DAX function; I need to make sure this works correctly. All the small things, like Sort by Column; making sure that came in.

Mark V:

I love that, by the way.

Kasper:

It’s so many of those little things that make the product complete.

Mark V:

I did a POC for a client using Tabular because it’s really a good fit and it was kind of a cool solution. One of the things I found when I was working on it was that working within SQL Server Data Tools is… not “awesome.” You can do it. You’ve seen some of my Tweets about changing Column names and things of that nature. There was a great tool that Cathy Dumas had written and put on CodePlex.

Kasper:

The DAX Editor one?

Mark V:

Yeah. The DAX Editor. Are there any thoughts to maybe upgrading that? Because, even though it was not fully compatible with [SSAS Tabular] Service Pack 1, and it had “issues,” it was awesome enough that I used it anyway.

Kasper:

That was a personal prototype, together with someone else. I cannot speak for that person.

Mark V:

OK. But something like that. Writing DAX in THAT environment, even with it not working perfectly, was awesome.

Kasper:

I know. I get that. They found a quicker way to do it. Of course, it was CodePlex, so it was not officially supported. But with SP1, a lot of things changed in the model, so it [DAX Editor] broke.

But, I totally get it. I really, sincerely hope we can come up with a better experience in the product. I’m not saying that we’re doing it right now, but we would definitely love to do something like that. This is part of what I was saying about having larger models. In Excel, it’s a different viewpoint. If you are in Excel, you work to solve “a” problem, and then you throw it away. In a Tabular model, as a BI Developer, you have to solve a problem for 40 people. So, you need to look at it from all different angles and different viewpoints. So, it’s bigger and more complex. So, you need bigger and better tools, and not just the Measure Grid.

Kasper:

One of the other examples of teams working together, and we almost had this in the Keynote: did you know you could have Excel 2013 with Power View and query Hadoop, with no caching, with our existing products today? I mean, this is awesome; it’s teams working together again. Excel 2013 Power View connects to a Tabular model in Direct Query mode. The Tabular model in Direct Query mode connects to PDW [Parallel Data Warehouse]. That sends PolyBase queries directly to Hadoop. And we worked with the PDW team to make sure the queries that we send are supported in PolyBase, so that they understand the queries that we send. It’s not going to be as fast as putting it into VertiPaq [xVelocity]. But, there’s no caching. You go directly from your Excel spreadsheet, in Power View, to data in Hadoop, and it comes back.

Mark V:

How long has this been supported?

Kasper:

This has been supported for quite some time. One of my colleagues is getting in line to write a blog post about it. He still hasn’t done it. This is one of those things where, before we say anything is “supported,” we have to test it. And that costs money, right? So, that takes money away from a Power BI feature or anything like that. But, in this case, we thought, “OK. This is going to be so cool!” And you can imagine, PDW just started going down this path. But, I can imagine, this will become faster in the future. So, this is going to be awesome.

Wrapping Up

I really liked hearing how teams within Microsoft are working together. Kasper has a great point regarding traditional Microsoft BI requiring you to purchase several different products. Power BI really turns that model on its head. If Microsoft really wants to democratize BI and bring it to the masses, the simplification of the process is a key step.

I have to confess that I had never heard of Prodiance until Kasper mentioned it. That sounds like some cool functionality that I will want to play with.

It seems that when new technologies come out, there always have to be people who say that some other technology must be dying in consequence. When Power View came out, there were people that decided Report Builder would go away. When Tabular came out, people panicked that Multidimensional must be going away. The sky is always falling, isn’t it? When Kasper made his point about the work that went into having Multidimensional Cubes support Power View, it made a lot of sense. Why would Microsoft invest time and effort in such a difficult task just to sunset Multidimensional soon after? That would make no sense. Kasper was pretty clear: Multidimensional is going to be around a while. As will MDX.

I really like Kasper’s point about Tabular being more in tune with the Agile development cycles of today. It is a lot easier to make iterative changes in Tabular than in Multidimensional. At the same time, his point about Tabular not having been around the block yet is a great one. There were cool aspects to my choice of Tabular for a client project last year. There were also a few surprises that I had to deal with. I look forward to building strong expertise with it so that I am in a better position to work around difficulties and take better advantage of new features when they come out. I was heartened by the fact that Kasper saw where I was coming from with a better environment for DAX development. Hopefully, there is more support for that within the team.

Kasper’s description of using Tabular in Direct Query mode to hit PDW is a great example of the future I would like to work in. Taking disparate technologies and putting them together to make a cool solution is just a blast.

Thanks so much to Kasper de Jonge for taking time out of his busy schedule (I think he presented 4 sessions at Summit) to sit down with me. My final interview post, with Director of Program Management for Microsoft BI Kamal Hathi, should come next week.

PASS Summit 2013 Interview with Matt Masson

At PASS Summit 2013 in Charlotte, I had the opportunity to sit down with Matt Masson (Blog|Twitter), Senior Program Manager on the Integration Services Team at Microsoft. I was really honored when Matt explained how busy his week was and then offered me a half hour anyway. I want to give a tremendous THANK YOU to Matt for being so generous with his time.

I had no grand plan/agenda for my series of interviews of Microsoft folk at PASS Summit 2013. As such, I plan to just display the transcript of my conversation with Matt as it occurred. NOTE: With Matt’s permission, I have edited out “Um” and “Ah” and other byproducts of casual conversation so that it flows better in writing.

The Transcript

Mark V

With the way things are going, with Cloud, and everything else going on, what does the future of SSIS development look like 5 years down the road?

Matt

I think five years out is a bit too far. We’re seeing a lot of big changes, especially around Hadoop. I think Hadoop and Big Data processing have been a big disruptor to the ETL space. I think there’s still a lot of what we call “traditional” ETL work, what people do today with SSIS. That’s where SSIS’ strength is. But we’re getting more and more requests about Cloud processing. That’s actually one of the things I’m going to talk about at PASS today, at the SSIS Roadmap session. One of the interesting things is, say, go back two or three years ago, we had people asking, “Can I have SSIS running in the cloud? Can you make SSIS run in the cloud?” And we’re like, “Yeah, that’s a great idea. Let’s go build it.” And then we started asking, “What scenarios?” and “Why do you want to run SSIS in the cloud?” Customers didn’t know. OK. Where’s your data? Data is all on prem. If your data’s all on prem, running in the cloud doesn’t necessarily make sense, right? But we’re seeing a shift of more and more data to cloud sources, landing in places like Azure, or even pulling in from remote sites or from different cloud providers like Salesforce.com or something like that. If your data’s already IN the cloud, then doing your ETL processing closer to that data makes a lot of sense. So, today, you can run SSIS in an Azure VM, and we’re having a lot of customers do that. So, you’re using your traditional On-Prem tools. It’s just running in the Cloud.

Other things we’re considering and looking at are, basically: what if SSIS could run as a service? What if you didn’t need your VMs? You could just deploy your packages and run things like that.

In addition to traditional ETL, we’re also looking at other technologies. There are other data movement technologies out there, like Azure Data Sync, which is very simple: I want to keep my On-prem databases and my Azure databases in sync. So, you don’t need a full ETL framework. You don’t need an ETL developer. Sync just takes care of it for you automatically.

So that leads us to a couple of different angles. We’re trying to make ETL easier, more automatic. Just keep schemas in sync. While for the more advanced scenarios, your traditional ETL scenarios, SSIS still makes a lot of sense. We need to evolve SSIS to better fit in the “Cloud” world.

Then there’s Big Data and Big Data processing. You’re seeing an evolution of technologies on Hadoop, right? There’s a lot of different technologies, lots of things going on. You’re seeing lots of tools at different stages of maturity. It’s a really interesting space to see how it’s evolving. One of the things I’m going to talk about today is to show SSIS integration with HDInsight, for example. So, from SSIS, you can provision HDInsight clusters, you can run Hive jobs, Pig jobs. You basically orchestrate everything you want to do on Hadoop from SSIS. You get the nice visual experience which is lacking from Hadoop and the Big Data system today.

Mark V

So, when you think about Hadoop, and the Cloud, and the Democratization of data; bringing BI to the Masses; the revolution of Self-Serve, one of the things you have is Users looking at data that they may not know how to vet properly. So, when I think of tools like DQS (Data Quality Services) that are often integrated into ETL, what are some of the things that we could look for in the future? Not necessarily products, but just concepts for how Microsoft is going to help handle that with moving data around to enable that Self-Service, while still keeping it easy to get to.

Matt

So, Self-Service is an interesting space. We have Power Query coming out, which gives you self-service, light-weight ETL. I think our self-service vision has been resonating really well. We’re seeing more and more customers picking up on that. But, just like there’s a space for Self-Service BI but also a need for traditional BI modelers to take that raw data into a model so that the “self-service” people can actually build their reports from there, I think the same thing applies in the ETL space as well. There’s Power Query for that light-weight, self-serve ETL, but there’s still the need for traditional ETL development as well, for IT to automate these processes, make them reliable, do the complex transformations, apply business logic, apply filtering, etc. I think there’s going to be that “professional” or “corporate” ETL as well as self-serve ETL. The challenge for us is figuring out whether that is a single tool that does both; perhaps a single tool with different faces or personas for different roles. I think we’re going to see a lot of convergence in our tools going forward. I think one of Microsoft’s strengths is the rapid time to results, making it as easy as possible to get it, and also having that functionality there that you can extend to do the more complex ETL scenarios as well.

Mark V

One of the other things you’re really known for is the BI Power Hour. Can you talk a little bit about how that was born and how it’s evolved and what it’s like to be a part of something like that?

Matt

Sure. The BI Power Hour is really interesting, and I was nowhere near the beginning of it. I think it was Bob Baker who started the original Power Hour, and it was focused around Office BI. And then the SQL folks eventually took over. But the idea was to let the Product Team have fun and show off the power of the products in non-typical scenarios, with no business value whatsoever. And we’ve sort of made it more and more ridiculous as time goes on. There are certain teams, like Reporting Services, that have always been there since the beginning, and they always did a game. Every year they did a game. I think they did Tic Tac Toe, and then Hangman; the game got more and more complex as they went through the years. I think I saw my first Power Hour in 2009 and I immediately wanted to be a part of it. I had never seen one before and I just thought it was really exciting. And the next year, I asked the organizer, Pej Javaheri, if I could participate. He wasn’t sure; “SSIS doesn’t usually do a Power Hour” and “it’s not very interesting.” So, I decided to prove him wrong. Since Pej left Microsoft, I’ve taken over the Power Hour. I do most of the coordinating and stuff. It’s always really important to make sure there is a business message there. We’re not as explicit about it anymore. But, afterwards, we always have people coming up to us and saying, “I didn’t know the tools could do that” and “I want to know more.” That’s really the whole point, essentially. And if we can get laughs doing it, then that’s even better. We usually try to balance out presenters showing new technology and showing off some valuable things. I typically just do ridiculous demos. I have a whole story that goes along with it. It’s a lot of fun. The hardest part is justifying the days of work that go into a ten-minute demo.

Mark V

It’s really exciting to see people who were involved in building the tools and are just so excited about features getting to go play with them.

Matt

With my demos, which usually revolve around cats, I had spent some time in SSIS and built some custom transformations. I’ve had someone ask me afterwards, “Why do you spend so much time on this? Why aren’t you doing work for the real product?” Yeah… it is a good point, but usually I limit Power Hour stuff to my “free time.” So flights, at home, things like that are usually when I work on those things. I try to really time-box it, to justify to myself devoting time to this really fun thing.

Mark V

When I saw you at TechEd and you were talking about the SSIS Catalog, one of the things you said was that there was some debate within Microsoft regarding the Package Deployment Model and the new Project Deployment Model. Even within the team, people were arguing about which way to go, and you were finally brought around to the Project Deployment Model. Is that something that is common when you are getting features ready for a product that you have that kind of debate? Is there a lot of that?

Matt

Yes, there’s a LOT of debate. The bigger the team, the more debate there is. 2012 was really interesting because that was as big as the SSIS team has ever been. We actually had half our team located in Shanghai, and they were really driving the Server components. And half our team located in Redmond. So, doing the coordination and making sure both teams agreed on the scenarios of what we were trying to go toward was really important. Doing development is all about resource constraints, right? You have a ton of stuff you want to do and you have to figure out, “Where is my time best spent?” Sometimes you’re making guesses. If you only do exactly what the customers want, you’re not necessarily moving your platform forward far enough. If we only focused on bug fixing, we probably wouldn’t have gotten a lot of the great functionality that we did out of 2012.

Mark V

…And the rounded corners…

Matt

Well, the rounded corners, yeah. Actually, the rounded corners joke was just a random Power Hour joke that I came up with on the fly. I’ve been using it since. Although I was in somebody’s session and they spent ten minutes building up that joke, and it was really painful to watch. But the rounded corners were just WPF; that’s just the way it looked. But I made the joke about Interns coming in and sanding down the corners for three months. And I actually had an angry customer come up to me afterwards and say, “You guys spent three months working on rounded corners and yet you didn’t fix the Web Services Task” and storm off. “It was a JOKE!” At PASS, people usually get that something’s a joke. At TechEd, people expect Microsoft presenters to be more serious, and jokes don’t always go over well.

Mark V

Even at a BI Power Hour?

Matt

When I did my first BI Power Hour at TechEd, I got a standing ovation on some of my lines, not because it was a great presentation. I think the line was “I’m a programmer. What do I need real friends for when I can create them programmatically?” Standing ovation. And it wasn’t because it was funny. It was because the audience felt the same way. And I just felt really sad at that point. And the next day, I had people coming up to me offering to be my friend and saying, “I don’t have any friends on Facebook either. I had to stop using it.” And they just didn’t get that it was a joke. I did my Power Hour at the Boston user group and nobody laughed. There were some chuckles, but that was it. But then I realized afterwards, when I was talking with somebody else, that the audience actually thought it was real and that they felt sorry for me. So, they didn’t know they were supposed to laugh.

Back to planning. There are definitely different viewpoints on the team. One thing was related to Package Deployment versus Project Deployment. Every time you change functionality but keep supporting a feature, your Test Matrix increases. So, the number of scenarios you have to test goes up. And we were really short on Test resources. And you can’t release something unless it’s properly tested. So, at one point, they wanted to say, “No more Package Deployment Model; we’re just going to do Project, because it means we can add more functionality because we’re not supporting these other things anymore.” It just did not make sense to take that approach. I think the thing I had mentioned at TechEd was Single Package Deployment versus Full Project Deployment. Long debates. But it came down to the architectural difference. We showed how much it would cost to implement Single Package Deployment and how much it would cost without. If it’s an extra month in development time, how many bugs can we fix in a month? How many other improvements can we make in a month? So, it’s a balancing act. I still think it’s the right decision. At the same time that we’re making those decisions internally, we’re talking to our MVPs, getting their feedback. I know the MVPs felt really strongly about Project Deployment, keeping it all together. And we were trusting in that. They’re basically the voice of our customers.

Wrapping Up

With Matt being so busy, and prepping for a session, I left the interview off there.

I have only had the chance to use SSIS 2012 on one project. And even with that small taste of this fabulous tool, I was tempted to just give Matt some applause and call it a day. I really appreciate the work and time that went into making SSIS 2012 such a tremendous improvement over previous versions of Integration Services.

I think Matt made some really great points here. The Big Data revolution was certainly a “disruptor” to common ETL. When dealing with data that is aging too quickly or in quantities that make taking the time to bring it into a data warehouse impractical, that certainly would disrupt common thinking around traditional ETL. While, as Matt points out, the need for traditional ETL will remain, there is some need on the part of those of us in the industry to re-assess what ETL looks like in some cases. It’s not always going to be a series of SSIS packages running on a server and populating a data warehouse. Sometimes, it will be information workers using Power Query to bring data from many sources into Excel.

As far as the Power Hour goes, it holds so many features that I strive to put into my own presentations. Humor is a huge one. There is a lot of research that shows that people learn better when they are having fun. Not to mention that an audience that is having a good time is less likely to throw rotten tomatoes; they stain, you know. Combine that with using features of the tools in creative ways, and you’ve really got something. I love finding new and exciting uses for technology. I often think of Ed Harris’ great line as NASA’s Gene Kranz in Apollo 13: “I don’t care what anything was DESIGNED to do; I care what it CAN do.”

I liked hearing from Matt that there is often a lot of debate within the SSIS team when it comes to features. It should remind all of us of time spent on project teams in our own work. The point this raises is that we need to remember that Microsoft, like any other organization, has finite resources that need to be spent in the best way possible. I hope we can all keep that in mind when we wonder why certain features haven’t gotten much love or don’t work the way we would want them to.

Matt’s point about MVPs is an important one. Along with whatever prestige may come from receiving the MVP award, there is also the responsibility to serve as a voice for the Community as a whole. Being an MVP is not about getting to wear that MVP ribbon at Summit or getting a pretty trophy; it’s about leadership, with the benefits and obligations that come along with it.

That brings us to the end. Even though my second interview was with Kamal Hathi, that happens to be the longest one as well. Since I have the typing skills of a rainbow trout, transcribing the audio for these interviews is a long process. As such, I will aim to have the post on my interview with Kasper de Jonge (Blog|Twitter) next week and the one with Kamal the week after. Thanks for your patience.

My Interview For Louis Davidson’s “Why We Write” Series

SQL Server MVP and author Louis Davidson (b|t) recently started a blog series called, “Why We Write.” His plan is to survey fellow writers/bloggers who make their living doing something other than writing to see why they spend their free time writing. For his first interviewee, Louis chose SQL Server MVP, author, and PASS Executive Committee member Thomas LaRock (b|t). You can read that interview HERE.

I was honored, and a bit flabbergasted, when Louis asked me to go second. You can find that interview HERE. Thanks very much, Louis.