Some Tips on Building Business Central Reports
I’ve noticed that many of my colleagues don’t enjoy creating Business Central reports, which is a shame, because building a good, non-trivial report can be very interesting and requires more skill than some would admit.
Here, I’ve listed a few things I’ve noticed others often overlook when creating reports (whether out of a lack of knowledge or interest). Some tips are straightforward, some are opinionated, but most are not sufficiently explained in books or blogs.
Before we start
Old reports are often full of bad practices that result from years of “good enough” work. These reports contain hacks that can now be replaced with modern system functionalities (e.g., language selection). Even Microsoft’s reports can be messy and overly complicated.
The next time someone asks you to copy a report from an older system, break the habit and create the report from scratch instead of copy-pasting walls of unmaintainable code. In the long run, this will save time.
That said, not every old report needs to be rewritten, and there are still valuable things to be learned from them.
Understand datasets
A dataset in Business Central is constructed as a tree structure (by nesting dataitems), but the layout engine receives it as a flat table. Datasets are built following two principles:
1. Nested dataitems
Nested dataitems correspond to the Cartesian product (CROSS JOIN) of tables. For example:
```al
dataitem(A; TableA)
{
    column(Col1; A.Foo) { }

    dataitem(B; TableB)
    {
        column(Col2; B.Bar) { }
    }
}
```
produces a dataset like this:
| Col1 | Col2 |
|---|---|
| Foo1 | Bar1 |
| Foo1 | Bar2 |
| Foo1 | Bar3 |
| Foo2 | Bar1 |
| Foo2 | Bar2 |
| Foo2 | Bar3 |
If a full cross join isn’t needed (and it usually isn’t), you can use DataItemLink and other properties to filter rows and create different types of joins.
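As a sketch using the standard Customer and Cust. Ledger Entry tables (the column names here are illustrative), DataItemLink turns the cross join into a left join, and PrintOnlyIfDetail on the parent tightens it into an inner join:

```al
dataitem(Customer; Customer)
{
    // Skip customers that produce no child rows (inner-join behavior)
    PrintOnlyIfDetail = true;
    column(CustomerNo; "No.") { }

    dataitem(CustLedgerEntry; "Cust. Ledger Entry")
    {
        // Keep only entries whose "Customer No." matches the parent row
        DataItemLink = "Customer No." = field("No.");
        column(EntryNo; "Entry No.") { }
    }
}
```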
2. Adjacent dataitems
Adjacent dataitems correspond to a union of tables. For example:
```al
dataitem(A; TableA)
{
    column(Col1; A.Foo) { }
}
dataitem(B; TableB)
{
    column(Col2; B.Bar) { }
}
```
produces a dataset like this:
| Col1 | Col2 |
|---|---|
| Foo1 | NULL |
| Foo2 | NULL |
| NULL | Bar1 |
| NULL | Bar2 |
| NULL | Bar3 |
In newer versions of BC, you can view the dataset structure directly. From the report’s request page, choose Send to… and select Microsoft Excel Document (data only). Exporting a few datasets will give you a clear understanding of how they look.
Dataitems and column naming
Dataset column names cannot always match AL column names, as dataset names cannot contain spaces. Instead of relying on autocomplete to generate monstrosities like column(Item_No_; "Item No."), take the time to write concise names in PascalCase.
I also recommend using PascalCase for dataitems and other variables. For reports with multiple dataitems, consider prefixing column names with the dataitem name (e.g., SalesLineItemNo).
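A quick sketch of this convention (the table and fields are just an example):

```al
dataitem(SalesLine; "Sales Line")
{
    // Prefixed, concise PascalCase names instead of autogenerated ones
    column(SalesLineItemNo; "No.") { }
    column(SalesLineDescription; Description) { }
    column(SalesLineQuantity; Quantity) { }
}
```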
All good programming teams have strict conventions about naming things in code. Only AL developers have the freedom to write novels in variable names.¹
Tagging dataitems
Unfortunately, AL does not allow multiple datasets in a single report, which can make working with multiple layout tables challenging. I often see complicated grouping and filtering or unnecessary nesting to overcome this.
A better approach is to “tag” your dataitems with dummy columns. Place the dataitems adjacent to one another, create a column with a constant value, and filter by this column in the layout:
```al
dataset
{
    dataitem(A; TableA)
    {
        column(ThisIsDataitemA; 1) { }
        // other columns
    }
    dataitem(B; TableB)
    {
        column(ThisIsDataitemB; 1) { }
        // other columns
    }
}
```
The column ThisIsDataitemA will have a value of 1 only for rows from dataitem A, and NULL elsewhere. This effectively simulates multiple datasets: in the layout, filter each table on its tag column (in RDLC, for example, a tablix filter with expression =Fields!ThisIsDataitemA.Value, operator =, and value 1).
Performance considerations
This tagging technique can also improve performance. For example, if a requested report lists ledger entries and includes company information in the header, many developers might write the dataset like this:
```al
dataset
{
    dataitem(GlEntries; "G/L Entry")
    {
        column(EntryNo; "Entry No.") { }
        column(AccNo; "G/L Account No.") { }
        column(DocumentNo; "Document No.") { }
        column(PostingDate; "Posting Date") { }
        // other ledger columns
        column(Name; CompanyInformation.Name) { }
        column(Phone; CompanyInformation."Phone No.") { }
        column(Email; CompanyInformation."E-mail") { }
        column(Address; CompanyInformation.Address) { }
    }
}

trigger OnPreReport()
begin
    CompanyInformation.Get();
end;

var
    CompanyInformation: Record "Company Information";
```
A report like this can have tens of thousands of rows in its dataset, and the code above copies the constant company information into every one of them, which unnecessarily slows down the report. We get a faster report if we construct it like this (test it if you don’t believe me):
```al
dataset
{
    dataitem(CompanyInformation; "Company Information")
    {
        column(CompanyInfo; 1) { }
        column(Name; CompanyInformation.Name) { }
        column(Phone; CompanyInformation."Phone No.") { }
        column(Email; CompanyInformation."E-mail") { }
        column(Address; CompanyInformation.Address) { }
    }
    dataitem(GlEntries; "G/L Entry")
    {
        column(GlEntry; 1) { }
        column(EntryNo; "Entry No.") { }
        column(AccNo; "G/L Account No.") { }
        column(DocumentNo; "Document No.") { }
        column(PostingDate; "Posting Date") { }
        // other ledger columns
    }
}
```
Think about the report transaction
Reports can be run within different transactions by specifying the TransactionType property. If you are working on a report that takes significant time to execute, consider changing the transaction type.
Often, it is not necessary to lock tables or wait for others to release locks, so using Browse as the transaction type is a good choice.
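Setting the property is a one-liner at the top of the report object (the report name and ID here are hypothetical):

```al
report 50100 "Long Running List"
{
    // Read without acquiring locks and without waiting on other sessions' locks
    TransactionType = Browse;
    // ...
}
```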
Use labels
Instead of declaring labels as global variables and adding them to the dataset, use the labels section of the report. This approach reduces the number of lines of code and decreases the dataset size (labels are transferred as report parameters).
For field captions, you can use the IncludeCaption property on columns to avoid defining unnecessary labels (unfortunately, this puts the label in the dataset).
Additionally, lock labels whenever it makes sense. For example, if you’re creating a report tied to a specific jurisdiction (e.g., a tax report), there’s no need to provide translations for the report’s labels. Avoid wasting time on translations in such cases.
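A minimal sketch of all three points (the label names and texts are made up):

```al
dataset
{
    dataitem(GlEntries; "G/L Entry")
    {
        column(PostingDate; "Posting Date")
        {
            // Emits the field's caption into the dataset alongside the value
            IncludeCaption = true;
        }
    }
}

labels
{
    // Transferred as a report parameter, not as a dataset column
    TitleLbl = 'G/L Entry Overview';
    // Locked labels are never sent for translation
    VatClauseLbl = 'Oslobođeno PDV-a', Locked = true;
}
```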
Avoid UserID and CompanyName
Globally available values like UserID and CompanyName return technical identifiers: the user’s login name and the company’s internal name. On printed documents, readers usually expect display values, so these should often be replaced with User."Full Name" and "Company Information".Name.
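A sketch of fetching the display values once, in OnPreReport, using the standard User and Company Information tables:

```al
trigger OnPreReport()
begin
    CompanyInformation.Get();
    // The User table is keyed by the user's security ID (a GUID)
    if User.Get(UserSecurityId()) then;
end;

var
    User: Record User;
    CompanyInformation: Record "Company Information";
```

Columns can then expose User."Full Name" and CompanyInformation.Name instead of the raw identifiers.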
Avoid scripting in layout
While it’s acceptable to calculate a column sum in the layout or hide a row based on a dataset value, excessive scripting in the layout should be avoided. If you find yourself nesting multiple IIF statements, concatenating numerous strings, or running χ² tests in the layout, you are likely doing it wrong. Logic should be implemented in AL, where it is expected to reside.
Controversial: Prefer RDLC for layouts
Compared to RDLC, Word and Excel are more familiar technologies and provide additional functionality, but we rarely need those extra features. RDLC, being a textual format, allows for easier editing and browsing directly from VS Code. Moreover, once you become familiar with Report Builder, you may find it easier to use than Word.
With RDLC, we get more meaningful Git diffs.² For example, suppose you’re reviewing a PR where the author has changed only one label. Would it be easier to review if the layout were in RDLC or Word? In which case can you be more confident that the author didn’t inadvertently alter another label?
Of course, Word and Excel layouts have their place. For prints with a lot of text (e.g., contracts), Word is a better choice because it supports proper paragraph typography.
Unanswered question
We have three methods of creating an Excel layout:
- Using the Excel Buffer table
- Using a report object with an RDLC layout
- Using a report object with an Excel layout (available in versions after BC20)
For the same data, which of these three methods is the fastest? Are there any significant differences?
I’m too lazy to test this with the latest version of BC, but on older versions I observed that the second method was significantly faster than the first (though this could simply be due to more optimized SQL queries when using a report object).
Understand tablix
One common issue I see many beginners struggle with is working with tables in RDLC (also known as tablix). I believe this is largely because they don’t understand that dataset creation in AL is independent of the tablix layout in RDLC.
Here are two basic facts to keep in mind when creating a tablix:
- The dataset is a single table.
- The default group in a tablix, (default), repeats for all rows in the dataset.
For more information, look into Microsoft’s documentation on tables in Report Builder. Unfortunately, I haven’t found documentation that is more specific to Business Central.
Looks matter
Most clients may not notice if your font choice is inconsistent or if your text is misaligned (though subconsciously, they might feel it). However, that doesn’t mean you shouldn’t care. Good design practices will set your work apart from others and help prevent layout issues.
Use proper RDLC page margins
Instead of offsetting all elements in the layout, set the page margins in the report properties. After that, set the working area (the white rectangle) width to match the available width.³
I usually set a 1cm margin on the top and bottom and 1.5cm on the sides.
Font selection
If you want your report to look modern and fresh, consider using Segoe UI instead of the default Arial.
Try to limit font sizes to two values, such as 9pt and 14pt.
Use a tablix for non-repeating layout
If you delete the (default) group from a tablix, the tablix will no longer grow with the dataset. Use this technique to create a consistent layout rather than placing dozens of text boxes. Not only will it look nicer, but the layout will also adapt to long strings that would otherwise overflow into other elements.
This approach can be applied in various scenarios, such as invoice headers, signature lines, address information, etc. Of course, you will probably want to remove all borders from the tablix.
Table formatting
For tablix elements that iterate over the dataset (i.e., have repeating rows), here are some styling tips:
- Remove all table borders, except for the border between the table header and the table body.
- Align numerical columns to the right.
- Align all other columns to the left.
- Ensure the horizontal alignment of the column header matches the alignment of its respective column.
- Vertically align the header row to the bottom.
- Vertically align all other rows to the top.
- Make the text in the header row bold.
- Distinguish aggregate rows from regular data rows.
- Include a caption with the group name in the aggregate row, for example: Total per customer Cronus.
- If you need to indent labels in a subgroup, use cell padding instead of adding spaces at the start of the value.
These are general table styling tips, and you can find many more ideas online on how to make tables look good. But remember, consistency is key: it is better to have custom reports that look like the system reports than a patchwork of styles.
Footnotes

¹ I can literally write a Haskell program in the name of an AL variable:

```al
var
    "main = print $ sum [1 .. 100]": Integer;
```

² Speaking of Git and reports: always build the project before committing, as rebuilding the project can sometimes change reports.

³ For example, if you are printing on A4 (21cm × 29.7cm) and you set your left and right margins to 1cm, then you should set the working area width to 21cm − 1cm − 1cm = 19cm.