Overcoming Technical Challenges in Self-service BI

In this series I cover business intelligence topics such as self-service BI, data-driven decision-making, promoting a data culture, increasing user adoption, and more.


In my last BI article, Is Self-service BI a Myth?, I covered good and bad reasons for launching a self-service business intelligence program, highlighting the importance of setting reasonable expectations. Now I’ll cover the technical challenges you may face while planning and implementing your own program and suggest some solutions.

Why is IT developing all reports in the first place?

This is an important question to ask yourself before you begin. Here are the most likely technical reasons why the IT department or report writers in finance are still writing all your company’s reports.

Unintelligible Data Model

Every application needs a database to process transactions and store data. These databases typically use a relational model that is excellent for day-to-day operations but poorly suited to the needs of BI reporting. If your reports use application databases (or copies of them) as their source, the naming of tables and fields and the structure of the database may be completely unintelligible to the average business user. In a large ERP application, for example, a report analyst may need to join a dozen different tables simply to bring the concept of ‘customer’ into a report. In this case, writing reports requires technical skills that would be impractical to transfer across your organization.

Complex Transformations

Analysts may be performing complex transformations on the data to meet business rules and definitions before displaying it in a report. For example, your business may have unique definitions for margin, discounts, or other measures that must account for your industry and unique processes. If analysts are relying on a relational model like the one above, they must perform these calculations at the dataset or report level. Business users should not need to repeat this work every time they want to create their own reports.
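To make this concrete, here is a minimal Python sketch of the kind of business-rule logic that ends up baked into individual reports. The rebate and freight rules, and the `gross_margin` function itself, are hypothetical examples of a company-specific definition, not a standard formula.

```python
# Hypothetical company-specific margin rule: rebates reduce revenue,
# and freight is treated as part of cost of goods sold.
def gross_margin(revenue, cogs, rebates=0.0, freight=0.0):
    """Return margin as a fraction of net revenue, per a (hypothetical)
    internal business definition."""
    net_revenue = revenue - rebates
    total_cost = cogs + freight
    return (net_revenue - total_cost) / net_revenue

# Example: a $1,000 sale with $600 cost, a $50 rebate, and $20 freight.
margin = gross_margin(1000, 600, rebates=50, freight=20)
# margin is about 0.347 (34.7%), not the naive (1000 - 600) / 1000 = 40%
```

When logic like this lives only inside each analyst's reports, every self-service user would have to rediscover and re-implement it; moving it into the data model solves that once.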

Compensation for Poor Data Governance

Similarly, analysts may be cleaning up known bad master data to compensate for poor data governance in the source application. For example, they may be merging duplicate categories, customers, or products so that each appears only once in reports. Business users will not know how to do this, nor should they need to.
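A minimal sketch of what that manual compensation looks like: a survivorship mapping from known duplicate records to a single surviving ID, applied before aggregating. The customer IDs and amounts here are hypothetical.

```python
# Hypothetical mapping of known duplicate customer records to the
# surviving master record (what analysts often maintain by hand).
DUPLICATE_MAP = {
    "CUST-0042-OLD": "CUST-0042",  # legacy record for the same customer
    "CUST-0099B": "CUST-0099",     # duplicate from a data-entry error
}

def canonical_customer(cust_id):
    """Resolve a customer ID to its surviving master record."""
    return DUPLICATE_MAP.get(cust_id, cust_id)

sales = [("CUST-0042-OLD", 100), ("CUST-0042", 250), ("CUST-0099B", 75)]

# Aggregate sales so each real customer appears only once.
totals = {}
for cust, amount in sales:
    key = canonical_customer(cust)
    totals[key] = totals.get(key, 0) + amount
# totals == {"CUST-0042": 350, "CUST-0099": 75}
```

Fixing the duplicates at the source, or at least centrally in the data model, removes the need for every report author to know this mapping exists.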

No Security Layer

Most applications have a security layer above the database layer that comes into play only when access is made via the application’s own user interface or via APIs intended for integrations rather than reporting. Therefore, granting direct access to an application database for reporting purposes is usually an all-or-nothing endeavor. For example, if you wanted to give a procurement analyst access to create supply chain reports, you’d also be giving them access to data they don’t need, such as the General Ledger. And if the source application is shared by multiple child companies of one larger parent, you may even be giving them access to data from other companies.

Data Integrity

IT has an obligation to ensure that business users and management are getting the truth when it comes to the measurement of business performance. When all of the above challenges exist, the safest way to do that is to write the reports themselves and strictly control access to the underlying data.

Complicated Visualization Tools

One obvious reason that IT analysts and not business users are writing all the reports is that your visualization tool may be non-intuitive and built primarily for technical people. I’m looking at you, SQL Server Reporting Services (SSRS) and Crystal Reports!

How do we overcome these technical challenges?

The answer is simple, but not easy: Data governance, data modeling, and user-friendly visualization tools.

Data Governance

The two most common types of bad data are duplicate master data (e.g., duplicate customers, products, and vendors) and invalidly formatted data (e.g., too many digits in the ZIP code field, phone number fields containing letters). In an ideal world, you would spend the time to clean up historical data in your source applications and put processes in place to keep future data clean. But that isn’t always practical, so you may want to take this in phases. Just know that the more you do up front on data governance, the easier the data modeling stage will be.
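The formatting side of this can be caught mechanically. Here is a minimal Python sketch of the kind of validation you might run while profiling master data; the patterns assume US-style ZIP codes and phone numbers and are illustrative, not exhaustive.

```python
import re

# Assumed US-style formats: 5-digit ZIP (optionally ZIP+4) and a
# 10-digit phone number once punctuation is stripped.
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")
PHONE_RE = re.compile(r"^\d{10}$")

def validate_record(record):
    """Return a list of formatting problems found in one master record."""
    problems = []
    if not ZIP_RE.match(record.get("zip", "")):
        problems.append("invalid zip")
    digits = re.sub(r"\D", "", record.get("phone", ""))
    if not PHONE_RE.match(digits):
        problems.append("invalid phone")
    return problems

# A ZIP with too many digits and a phone containing letters both fail:
validate_record({"zip": "787011", "phone": "512-CALL-NOW"})
# -> ["invalid zip", "invalid phone"]
```

Running checks like these over your master data gives you a concrete, measurable backlog for the governance phase, rather than a vague sense that "the data is dirty."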

Data Modeling

This is where the magic happens and where most of the challenges above will be solved. I won’t go into all the technical and architectural options because there are many. The basic idea is that data is moved from your application databases to a data warehouse or data lake, which is better suited for reporting and analytics. As part of this move, data is transformed from a relational model into a dimensional model (or, even better, a tabular model) that is easier for business users to understand and has the agreed-upon business measurements baked in. Access is granted to business users based on a concept called row-level security: when business users connect their visualization tool to the data model, they see only what they should. Think of it as role-based filtering.
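Row-level security is ultimately just a per-role filter applied inside the data model rather than in each report. A minimal Python sketch of the idea, with hypothetical roles and regions (in practice this lives in the warehouse or tabular model, e.g. as security roles with filter expressions):

```python
# Hypothetical roles, each mapped to a row filter. An executive role
# passes everything; analyst roles see only their own region.
ROLE_FILTERS = {
    "emea_analyst": lambda row: row["region"] == "EMEA",
    "na_analyst": lambda row: row["region"] == "NA",
    "executive": lambda row: True,
}

sales = [
    {"region": "EMEA", "amount": 120},
    {"region": "NA", "amount": 300},
]

def query(role, rows):
    """Apply the caller's row-level filter before returning any data."""
    allow = ROLE_FILTERS[role]
    return [r for r in rows if allow(r)]

# An EMEA analyst sees only EMEA rows; an executive sees all rows.
```

Because the filter is enforced in the model, every report built on top of it inherits the restriction automatically, which is what makes self-service access safe to grant.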

User-friendly Data Visualization Tools

Now it’s time to pick a user-friendly visualization tool so that business users, with some training, can create reports and dashboards just as they would create Word and PowerPoint documents. Pick the tool that integrates most easily into your current ecosystem. For example, if you already use other Microsoft services like Azure and Office 365, Power BI is probably your best option. Whatever you choose, low friction is the key to user adoption.

Do I need to hire a BI consulting partner?

If you have to ask, the answer is probably yes. Look for a BI partner that has worked with clients in your industry and has implemented self-service BI specifically. Many BI partners bill hourly, which is a direct conflict with your interests around user enablement. Try to find one that uses fixed pricing for its projects, and make sure the proposal includes ETL performance, setup of monitoring and alerting, row-level security, and training to help you create a self-service culture. And of course, ask them about their approach to data modeling and how they will document the model for business users.

In the next article, I’ll give the same treatment to cultural challenges. Please let me know your thoughts in the comments!


Daniel Lucas is the founder of THRDparty Advisors which is on a mission to protect private capital from cyber risk, digitally transform portfolio companies, and maximize exit returns. They are IT executives advising PE firms and PE-backed companies through every investment stage. Visit https://thrdparty.com for service details, case studies, and pricing.
