In this post a crosstab with multiple detail rows is created. I used a question on the birt-exchange forum as a starting point for writing this post, and I used the .csv file that was attached to that question as the data source. This is a link to the question.

If you don’t feel like following the link, the person.csv file contains these rows:

Computed Column
First, add a computed column to the data set. It is nothing more than a static value that will be used as a dimension in the cube created in the next step.

The Cube
Create a data cube with two dimensions: one on the PK field and one on the computed column justANumber.
Next, create summary items for both the person's and the spouse's names and their birthdays. Put all of these under the same Summary Field and make sure to set the Data Type to String and the Function to FIRST:

The Crosstab
From the palette drag a crosstab item to the report layout, then take these steps:

  • drag the grpPK dimension to the columns area
  • drag the grpNumber dimension to the rows area
  • drag the summary fields name and spousename to the summary area
  • create a grid (1 column, 2 rows) in the rows area
  • create two other grids (1 column, 2 rows) in the name and the spousename columns in the summary area

This is what you should have so far:

Now let’s move on:

  • Create labels “Name:” and “Date of Birth” in the grid in the row dimension area
  • Drag the name and the spousename fields – they are already in the crosstab – into the first line of the grid that is in the same cell
  • Drag the DOB and the spouseDOB fields from the cube into the second line of the grids. For some reason this can't be done in one step: first drag the field underneath the grid, then drag it from the newly created column into the grid, and finally remove that column, choosing "no" if you are asked whether you want to remove unused bindings

Now the crosstab should look like this:

And, after doing some formatting of the gridlines and setting some visibility properties, this is the resulting report:


With “some kind of column grouping”, I mean that the output of the report looks like this:

What you see are employees listed by city, with each city in its own column. If you have a better name for this than "column grouping", please post it in the comments and I'll be glad to adopt it in the title of this post if I like it.

In order to get this output, you need to have the rows in the data set numbered by city. That field will be used as the row dimension in a crosstab. BIRT does not provide out-of-the-box functionality to get these row numbers in the data set, so I decided to share my approach.
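The row numbering that the crosstab relies on can be sketched in plain JavaScript; the function and sample data below are illustrative, not part of BIRT:

```javascript
// Assign a per-group row number (like cityRownum) to an ordered record list.
// Assumes the rows are already sorted by the grouping key.
function addGroupRownum(rows, key) {
    var prev = null, n = 0;
    for (var i = 0; i < rows.length; i++) {
        if (rows[i][key] !== prev) { // a new group starts: reset the counter
            n = 0;
            prev = rows[i][key];
        }
        rows[i].cityRownum = ++n;
    }
    return rows;
}

var employees = addGroupRownum([
    { city: "Boston", lastname: "Firrelli" },
    { city: "Boston", lastname: "Patterson" },
    { city: "London", lastname: "Bott" }
], "city");
// the two Boston rows get cityRownum 1 and 2; the London row starts again at 1
```

Used as the row dimension, this number lines up the n-th employee of every city on the same crosstab row.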

The Query
The data source for this sample report is the ClassicModels database. The data set query selects all employees and the city they work in:

select o.city,
       e.lastname
from   offices o,
       employees e
where  o.officecode = e.officecode
order  by o.officecode,
       e.lastname

Getting the rownumber
There are a couple of ways to get this number:

  • an analytical function in the query
  • the GROUPROWNUM function from the group functions plugin
  • other creative SQL solutions

I will further talk about the first two solutions.

The Analytical Function
This one is easy. If your database provides analytical functions, it is enough to adapt your query so that it selects the row number by city. The query – tested on an Oracle database – looks like this:

select o.city,
       e.lastname,
       row_number() over (partition by o.city order by e.lastname) as cityRownum
from   offices o,
       employees e
where  o.officecode = e.officecode
order  by o.officecode,
       e.lastname

The GROUPROWNUM function
To get this one to work, you need to install the group functions plugin. You can find a download and all you need to know about it in this Devshare post.

After installing the group functions plugin, add a computed column to the data set. Use the GROUPROWNUM function and choose CITY in the Aggregate On field:

The Crosstab
The final step to complete the report is creating the crosstab.
First create a data cube with two Dimensions (city and cityRownum) and one Summary field (lastname). Make sure to use the FIRST function in the summary field.

The data cube should now look like this:

Drag the cube to your report and make cityRownum a row dimension and CITY a column dimension:

After some formatting of the styles (removing the grid lines) and the crosstab (Hide Measure Header, set the width of the row dimension to zero), your report should produce the output shown at the top of this post.

This post is the answer to a question in the comments of one of my earlier posts:
Follow the link if you want more info on the drop property.

Miguel asks what the difference is between “Detail” and “All” as a value for the drop property. An example will show the difference.

In this example I have this very simple query that selects from the ClassicModels sample database:

select o.city,
       e.lastname
from   offices o,
       employees e
where  o.officecode = e.officecode
order  by o.officecode,
       e.lastname

This is what the report layout looks like:

Notice the grouping on the CITY field and that the field appears in the header as well as in the footer row.

Now when I select the cell in the header row that has the CITY field in it, and choose “Detail” as the value for the drop property, this is the result:

And when I choose “All”:

So the difference is in the footer row. Drop All removes all occurrences of the CITY field beneath the header row, while Drop Detail only hides the CITY field in the detail section and still shows it in the footer row.

In a scripted data set in BIRT you don’t need to define parameters that are bound to report parameters. That is because you can refer to report parameters directly in the open or fetch script, like in this example of an open script:

cities = [];
if (params["pCOUNTRY"]=="France") {
   cities[0] = "Paris";
   cities[1] = "Marseille";
   cities[2] = "Lille";
} else if (params["pCOUNTRY"]=="USA") {
   cities[0] = "New York";
   cities[1] = "Chicago";
}

When you are dealing with a scripted data set that is bound to a nested table, and you need the parameter value to be a value from the outer table, you’ll need a different approach.

In that case, select the data set, choose Edit data set and add a parameter to the scripted data set.

Next, bind a value from the outer table to the parameter. Select the inner table, go to the bindings tab and select the Data Set Parameter Binding button:

And finally, write the open script like this:

cities = [];
if (inputParams["pCOUNTRY"]=="France") {
   cities[0] = "Paris";
   cities[1] = "Marseille";
   cities[2] = "Lille";
} else if (inputParams["pCOUNTRY"]=="USA") {
   cities[0] = "New York";
   cities[1] = "Chicago";
}

For some reason it took me some time to find out about the use of inputParams[“datasetParam”]. So I hope that if you are dealing with the same issue, this post has eased your search.

Just a couple of days after I posted my Columns to Rows in BIRT Data Set I ran into this birt-exchange forum post where Robilco provides a solution for the exact same problem. I like his approach and he was kind enough to let me write about it on this blog.

This approach has no need for computed columns and a joint data set, just a scripted data set and some more scripting.

Problem Description
The starting point is a .csv file with this content:

The different types of budgets need to be in rows instead of in columns.

CSV data set
As in my previous post, a data set is created on the .csv file, but this time some scripting is added.

In the BeforeOpen method of the data set, two variables are initialized:

idx = 0;
budgetRecs = [];

In the onFetch method of the data set, an array is created with the budget data. The budget types are already separated in this step:

budgetRec = {department: row["Department"], budgetType: "Infrastructure", budget: row["Infrastructure"]};
budgetRecs[idx++] = budgetRec;
budgetRec = {department: row["Department"], budgetType: "Training", budget: row["Training"]};
budgetRecs[idx++] = budgetRec;
budgetRec = {department: row["Department"], budgetType: "Communications", budget: row["Comms"]};
budgetRecs[idx++] = budgetRec;
budgetRec = {department: row["Department"], budgetType: "Consumables", budget: row["Consumables"]};
budgetRecs[idx++] = budgetRec;

Scripted data set
The scripted data set does nothing but fetch the data that was entered into the array in the onFetch method of the CSV data set.

The open method initializes the array index:

idx = 0;

The fetch method (not the onFetch method!) reads the array and moves the data into the data set columns.

if (idx < budgetRecs.length) {
	row["Department"] = budgetRecs[idx].department;
	row["BudgetType"] = budgetRecs[idx].budgetType;
	row["Budget"] = budgetRecs[idx].budget;
	idx++;
	return true;
}
return false;

Dummy grid
To make sure the CSV data set is executed, drag a grid element to the report layout and bind it to the CSV data set. As long as no report items are put in the grid, it will not be visible when the report is executed.

To check if things work as expected, drag the scripted data set to the report layout and run the report. As you can see: every budget type is on a separate row:

Many thanks to Robilco for providing the inspiration to this post!

This article describes a way to transform column data into row data with the help of a scripted data set, computed columns and a joint data set.

Most of the time I use SQL to perform the task of transforming columns to rows, but some time ago, when helping someone out on the birt-exchange forums, I needed to come up with a different approach. The poster got his data from a .csv file, so using SQL was not an option. (See the bottom of this post for a SQL-based solution.)

Problem Description
A pie chart needs to be created based on the data in a .csv file:

The different types of budgets – Infrastructure, Training, Comms and Consumables – are all in separate columns and have to become the slices of the pie chart. If we take the csv based data set as it is, there is no unique column that can be selected as a values series field.

CSV Data Set
First of all: create a data source and data set on the .csv file. This is pretty straightforward.
Also, add a computed column that will always contain the value 1 and name it join_col. We will need this column when creating the Joint Data Set in one of the next steps.

Scripted Data Set
Next, create a scripted data set that has two columns:

  • join_col
  • col_number

The join_col field will always contain the value 1 and will be used to join this data set to the .csv data set created in the previous step.
The col_number increments for each row in this data set, and the number of rows must correspond to the number of columns in the .csv that you want to transform to rows. In this case we need 4 rows, as there are 4 types of budget in the .csv file.

To create a scripted data set take these steps:

  • create a new data source → make sure you choose Scripted Data Source and enter an appropriate name, e.g. dsScripted
  • create a new data set → Select dsScripted as the datasource and enter an appropriate name, e.g. dsScriptedData
  • add both join_col and col_number as Integer type columns
  • in the open script of the data set, add this code:
    joinCols = [];
    colNums = [];
    for (i = 0; i < 4; i++) {
       joinCols[i] = 1;
       colNums[i] = i+1;
    }
    idx = 0;
  • in the fetch script of the data set, add this code:
    if (idx < joinCols.length) {
    	row["join_col"] = joinCols[idx];
    	row["col_number"] = colNums[idx];
    	idx++;
    	return true;
    }
    return false;
  • If you now Edit the data set and select Preview Results, you should see this:

Joint Data Set
In the joint data set, we will now join the csv data set and the scripted data set together based on the join_col field that exists in both data sets. Every row in the csv data set is joined to every row in the scripted data set. So for every department there will be 4 rows in this data set:
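The effect of this join can be sketched outside BIRT; the sample rows below are made up, but the join_col/col_number mechanics are the ones described above:

```javascript
// Cross-join sketch: every csv row matches every scripted row,
// because join_col is 1 on both sides.
var csvRows = [
    { Department: "Sales", join_col: 1 },
    { Department: "IT", join_col: 1 }
];
var scriptedRows = [1, 2, 3, 4].map(function (n) {
    return { join_col: 1, col_number: n };
});

var joined = [];
csvRows.forEach(function (c) {
    scriptedRows.forEach(function (s) {
        if (c.join_col === s.join_col) {
            joined.push({ Department: c.Department, col_number: s.col_number });
        }
    });
});
// two departments x four scripted rows = eight joined rows
```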

Next step is to create two computed columns. One will hold the budget type and the other will hold the actual budget on each row. The first column, budgetType, has an expression like this:

var bt;
switch (row["dsScriptedData::col_number"]) {
case 1: bt = "Infrastructure"; break;
case 2: bt = "Training"; break;
case 3: bt = "Communications"; break;
case 4: bt = "Consumables"; break;
}
bt;

The second column, budget, has an expression like this:

var b;
switch (row["dsScriptedData::col_number"]) {
case 1: b = row["dsBudget::Infrastructure"]; break;
case 2: b = row["dsBudget::Training"]; break;
case 3: b = row["dsBudget::Comms"]; break;
case 4: b = row["dsBudget::Consumables"]; break;
}
b;

This is what you should see when you Edit the data set, select Preview Results and scroll to the right:

With the joint data that we have created, it’s a piece of cake to create the pie chart. Put the budget column in the Series Definition, the budgetType column in the Category Definition and the dsBudget::Department column in the Optional Grouping:

The result now looks like this:

*SQL Solution
If the data does not come from a csv file but you are selecting it from a database, you don't have to worry about scripted data sets, computed columns and all the other fancy features mentioned in the article above. You can write a query like this and you are ready to move on:

SELECT Department,
       'Infrastructure' as budget_type,
       Infrastructure_budget as budget
FROM   your_table
UNION ALL
SELECT Department,
       'Training' as budget_type,
       Training_budget as budget
FROM   your_table
UNION ALL
SELECT Department,
       'Communications' as budget_type,
       Comms_budget as budget
FROM   your_table
UNION ALL
SELECT Department,
       'Consumables' as budget_type,
       Consumable_budget as budget
FROM   your_table

If you want to put a filter on the values series of a BIRT chart, you'll need some kind of workaround, because the values series can't be used after selecting the filter button on the Select Data tab of the chart dialog. The easiest way to accomplish this is to add the grouping and aggregation in the query and then put a filter on the aggregated data column. However, if you want to use the ungrouped data in other parts of your report, you might prefer another workaround.

Let’s say you want to know from the classicmodels database the top 5 of employees that have taken the most orders.

The Data Set
This query selects the employee’s lastname and the ordernumbers of his customers:

select e.lastname,
       o.ordernumber
from   orders o,
       customers c,
       employees e
where  o.customernumber = c.customernumber
and    c.salesrepemployeenumber = e.employeenumber

The Report Table
As it is not possible to use a filter directly on the values series of the chart, we need to find some other item to put the filter on: a report table. The chart will be created in the header row of the table.

Take these steps to create a table:

  • drag a table element from the palette to the report, choose 1 row and 1 column and bind it to the data set you created
  • right-click on the table and select ‘Insert Group’ to add a group ‘grpEmployee’ to the table and choose lastname in the Group On field
  • select the table, go to the Binding tab and add an aggregation like this:
  • select the table, go to the Groups tab and Edit the group ‘grpEmployee’ to add a filter like this:
  • in the same edit group dialog, select Sorting and add a sort on row[“aggcount”] descending
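Taken together, the group, aggregation, filter and sort produce a top 5 per employee. That logic can be sketched in plain JavaScript (the sample data is made up):

```javascript
// Count rows per employee, sort descending, keep the top 5 -
// the same effect as the aggregation, filter and sort on the report table.
var orders = [
    { lastname: "Jennings", ordernumber: 1 }, { lastname: "Jennings", ordernumber: 2 },
    { lastname: "Castillo", ordernumber: 3 },
    { lastname: "Hernandez", ordernumber: 4 }, { lastname: "Hernandez", ordernumber: 5 },
    { lastname: "Hernandez", ordernumber: 6 }
];

// aggcount per lastname, like the table's COUNT aggregation
var counts = {};
orders.forEach(function (o) { counts[o.lastname] = (counts[o.lastname] || 0) + 1; });

var top5 = Object.keys(counts)
    .map(function (k) { return { lastname: k, aggcount: counts[k] }; })
    .sort(function (a, b) { return b.aggcount - a.aggcount; })
    .slice(0, 5);
// the first entry is the employee with the most orders
```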

The Chart
Now we are ready to create the chart:

  • drag a Chart item from the palette into the header row of the table
  • select the chart type you like (I chose a simple Bar Chart)
  • move on to the Select Data tab, make sure the Select Data From Container checkbox is checked and then select row[“aggcount”] at the Value Series and row[“grpEmployee”] at the Category Series:

The Result
To clean up things a bit, you can remove all the rows from the table, except for the header row, and run the report. The result should look like this:

Grouping within a BIRT data set
In the context of the Plug In 2 BIRT Contest for Autumn 2012, I created the Group Functions plugin. You can find the plugin, documentation and a sample report on birt-exchange Devshare. The group function aggregations in the plugin make it possible to generate the SUM, COUNT or ROWNUMBER by a group of data within a BIRT data set. With the right combination of these functions and the use of filters you can create many-to-many relations in a joint data set by first applying grouping in your data sets. You can read more on this in the pdf that you can download from the link mentioned earlier.
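Conceptually, a group aggregation like GROUPSUM attaches the group total to every row in the data set. Here is a minimal JavaScript sketch of that idea (not the plugin's actual code; names and data are illustrative):

```javascript
// For each row, compute the sum of `value` over all rows sharing `key`
// and attach it to the row, like a GROUPSUM computed column would.
function groupSum(rows, key, value) {
    var totals = {};
    rows.forEach(function (r) { totals[r[key]] = (totals[r[key]] || 0) + r[value]; });
    rows.forEach(function (r) { r.groupsum = totals[r[key]]; });
    return rows;
}

var data = groupSum([
    { dept: "Sales", amount: 10 },
    { dept: "Sales", amount: 15 },
    { dept: "IT", amount: 7 }
], "dept", "amount");
// both Sales rows carry groupsum 25, the IT row carries groupsum 7
```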

Here’s a screenshot of how a computed column is created with the GROUPSUM function:

And this is the result in the preview results area of the data set:

You can vote
The contest closed on November 30, and registered birt-exchange users can now vote for the plugin they like most on birt-exchange (the poll is in the sidebar on the right). So if you like this feature, give it a vote. Thanks!

In this post a small report is built that selects a text from the database. This text contains codes which need to be replaced by data from another data set.

Let’s say you have the texts that need to appear on a letter in the database and one part of it is this salutation: “Dear <title> <name>,”.

There are two datasets in the report:

  • letter_text, that selects the salutation text
  • customer, that selects the title and the name of the customer the letter will be sent to

Build the report layout following these steps:

  • create a table element with data set = letter_text
  • in the detail row of the table, create a second table with data set = customer
  • in the detail row of the customer table, create a Dynamic Text item with expression = eval(row._outer["TEXT"])

So it comes down to saving the text parts in the database as JavaScript that can be executed in the Dynamic Text item expression. This is how the database tables are created:

CREATE TABLE letter_texts (text VARCHAR2(2000));
CREATE TABLE customers (cust_id NUMBER(9), title VARCHAR2(20), cust_name VARCHAR2(200));
INSERT INTO letter_texts VALUES ('"Dear " + row["TITLE"] + " " + row["CUST_NAME"] + ","');
INSERT INTO customers VALUES (1, 'Mr', 'Smith');
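Outside of BIRT, the mechanism behind the Dynamic Text expression can be sketched in plain JavaScript, with the row object faked:

```javascript
// The text stored in the database is a JavaScript expression that
// references the row object; eval() turns it into the final salutation.
var row = { TITLE: "Mr", CUST_NAME: "Smith" };
var text = '"Dear " + row["TITLE"] + " " + row["CUST_NAME"] + ","';
var salutation = eval(text);
// salutation === "Dear Mr Smith,"
```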


The report now looks like this:

And this should be the result:

We had to build an Eclipse RCP application in which SQL statements could be edited. At first we used a StyledText widget for this. Soon we were in need of extra features like syntax coloring, content assist and so on. Why not use a ready-made SQL editor to open a SQL file, you say? Well, we wanted to embed the area in our own editor. The SQL area should be part of the editor, not the only thing in it!

After a while we discovered a normal StyledText was not the way to do this. After some investigation we found a widget that fits our needs perfectly: the SQLStatementArea widget, which is part of the Eclipse Data Tools Platform (DTP). After installing all the necessary plugins, coding could begin.

The SQLStatementArea widget can be used like any other widget in an editor.

ISQLSourceViewerService viewerService = new CustomSQLSourceViewerService();
SQLStatementArea sta = new SQLStatementArea(this, SWT.BORDER, viewerService, true);
  • this: the SWT Composite where the area should be in
  • SWT.BORDER: we want a border to be visible, other styles can be added as usual
  • viewerService: an instance of a class implementing ISQLSourceViewerService (more about this in a minute)
  • true: when this is true, line numbers are visible

When instantiating the SQLStatementArea, a class implementing ISQLSourceViewerService is needed, as mentioned earlier. In this class a helper method needs to be implemented that sets up the partition scanner for the document. This method determines which parts of the text are scanned for.

This is the code of the method we used for this class:

public void setUpDocument(IDocument doc, String dbType) {
    SQLPartitionScanner sqlPartitionScanner = new SQLPartitionScanner();
    if (doc instanceof IDocumentExtension3) {
        IDocumentExtension3 extension3 = (IDocumentExtension3) doc;
        // the partition content types to scan for
        FastPartitioner _partitioner = new FastPartitioner(sqlPartitionScanner,
                new String[] { ISQLPartitions.SQL_COMMENT,
                               ISQLPartitions.SQL_MULTILINE_COMMENT,
                               ISQLPartitions.SQL_QUOTED_LITERAL });
        extension3.setDocumentPartitioner(ISQLPartitions.SQL_PARTITIONING, _partitioner);
    }
}

Next in line is the SourceViewerConfiguration. This class is responsible for syntax coloring, content assist and so on. In our application we've used the code from here, because it is more advanced and extensive (multi-line comments, for example). However, the code in the available plugins is sufficient for this tutorial. The configuration class to use for a SQL configuration is the SQLSourceViewerConfiguration, which can be found in the org.eclipse.datatools.sqltools.sqlbuilder.views.source package.

SQLSourceViewerConfiguration sqlSourceViewerConfiguration = new SQLSourceViewerConfiguration();

There are still a few things that need to be added in order for the SQLStatementArea to work. Like most widgets, it needs layout data, but it also needs a document that holds the actual input. Because we're going to add databinding later, we're setting up an empty document for now.

sta.setLayoutData(new GridData(GridData.FILL_BOTH));
document = new Document();

Our next step is to add databinding so the text is displayed correctly and the object is updated immediately. This is easy because the SQLStatementArea holds a StyledText widget that we can use to create the binding. If you're not familiar with databinding, you can check out this great tutorial, or just set the text of the document for now (instead of leaving it empty).

IObservableValue observeTextObserveWidget = SWTObservables.observeText(sta.getViewer().getTextWidget(), SWT.Modify);
IObservableValue sqlSql_statementObserveValue = EMFEditProperties.value(editingDomain, Literals.DOCUMENT_SQL_STATEMENT__SQL_STATEMENT).observe(sql);
bindingContext.bindValue(observeTextObserveWidget, sqlSql_statementObserveValue, null, null);
  • editingDomain: we’re connecting the binding to our editingdomain for undo/redo functionality and dirty state (use EMFProperties when not using an editingdomain)
  • sql: the EMF object which holds our SQL statement

When running the application, the result is an editor which holds a great area for editing SQL.

SQLStatementArea integrated in Eclipse Editor


The only thing is, when hitting CTRL+SPACE now, content assist doesn't work. This is because we have to bind the content assist of our SQLStatementArea to the Eclipse content assist.

handlerService = (IHandlerService) editor.getSite().getService(IHandlerService.class);
IHandler caHandler = new AbstractHandler() {
    public Object execute(ExecutionEvent event) throws ExecutionException {
        // trigger content assist on the SQLStatementArea's source viewer
        sta.getViewer().doOperation(ISourceViewer.CONTENTASSIST_PROPOSALS);
        return null;
    }
};
if (contentAssistHandlerActivation != null) {
    handlerService.deactivateHandler(contentAssistHandlerActivation);
}
contentAssistHandlerActivation = handlerService.activateHandler(ITextEditorActionDefinitionIds.CONTENT_ASSIST_PROPOSALS, caHandler);

Now, when starting the application again, content assist will work!

SQLStatementArea integrated in Eclipse Editor with content assist


When using this in an application, we recommend using the code mentioned earlier or writing your own. This way you can have far more keywords, multi-line comments, content formatting (uppercase/lowercase) and so on. But we hope this tutorial gives you a great start!