From TSG Doc

Building Experiments

An experiment in BrainStream consists of experiment definition tables, block files and common block files. The experiment may also need user defined functions. The entire experiment is summarized in the BrainStream project file. In the following, each of these components will be discussed.

Experiment definition tables

The experiment definition tables (.edt) form the core of your BCI experiment. These tables specify which actions need to be executed at what time. For each experiment, four different tables are needed: the Actions table, the DataSelection table, the Dictionary table, and the Trigger table. Together, the experiment definition tables are called the experiment definition file. BrainStream has its own internal editor for creating the tables; for backwards compatibility, it also supports Excel files in which each table is put in a different sheet. In the following, the experiment definition tables will be discussed in more detail.

Actions table

The Actions table specifies the actions that should be executed for each marker. Table 1 is an example of an Actions table:

marker     time   function   feval  looptick  client
mrk1       EVENT  fnc1,fnc2
           1      fnc3
mrk2,mrk3  EVENT  fnc4

Table 1: Actions table

The first column of the Actions table, the marker column, contains the names of all markers that elicit the execution of certain actions. If actions for the same marker are listed in different rows of the table, the marker name only needs to be specified in the first of these rows. For example, in Table 1, the actions for marker mrk1 are specified in two different rows, and the marker column remains empty for the second row. Multiple markers can be specified, separated by commas; any marker from this list will trigger the execution of the associated actions. For example, in Table 1, function fnc4 will be executed whenever either marker mrk2 or mrk3 arrives. The marker column can also contain a reference to another table (see Import-tables).

The second column (time) specifies at which time points, relative to the time of the incoming marker, the actions should be executed. The exact timing of execution can be specified in several ways: the action can be executed directly at marker onset, when a certain amount of data becomes available, some time after the marker, or when another marker arrives (see Table 2). For example, Table 1 specifies that functions fnc1 and fnc2 are executed at the onset of marker mrk1, whereas function fnc3 is executed one second after marker onset.

EVENT        Executed at marker onset
DATA         Executed as data becomes available
a number     Executed number seconds after marker onset
another mrk  Executed at onset of marker mrk

Table 2: Specifying time of action execution

The third column (function) can contain one or more functions, which will be executed in the order in which they appear in the table. So, in Table 1, at the onset of marker mrk1, first function fnc1 and then function fnc2 will be executed; one second after the onset of marker mrk1, fnc3 will be executed. A number of BrainStream functions can be used in the function column. Alternatively, you can write your own user defined functions, which will be discussed in detail below.

The next three columns are optional. The feval column allows for specification of any functions that do not process any of the global variables. The looptick column can contain special functions that will be put into a loop by BrainStream. The client column can be used to direct execution of functions to another remote Matlab session (see Parallel Mode).

Although the order of these first columns in the table is arbitrary, it is best to keep it as described here. All subsequent columns are free to use for an arbitrary number of user defined variables (more on this later).

Trigger table

While actions give meaning to the events, the Trigger table implements the logic of the experiment's program flow. It defines how each event can trigger the execution of other events. Conditional expressions implement runtime-dependent triggering, made possible by allowing global variables to be used in these expressions. An example would be to trigger either the next_trial event or the next_sequence event based on the current value of the num_acquired_trials variable. Table 3 shows the corresponding Trigger table.

marker  time   fire           datasource  delay  condition                  triggeraction
trial   EVENT  next_trial     eeg         0      num_acquired_trials <= 10
trial   EVENT  next_sequence  eeg         0      num_acquired_trials > 10
-              print_output   eeg         0      num_acquired_trials==10    mod

Table 3: Trigger table

As long as the number of acquired trials does not exceed 10, a next_trial event is triggered by the trial event; otherwise a next_sequence event is triggered. If no conditional expression is specified, the corresponding event is always triggered. Optionally, a delay can be specified; by default, its value is taken relative to the time point of the event that triggers it. If it should instead be taken relative to the actual moment of execution of the event, specify the delay as 0,now. The datasource column is only relevant if the triggered event defines data (i.e., it has a DATA time point), so that the correct data epochs can be collected. This is useful, for example, if a single marker triggers periodic data collection, or a limited number of data epochs as determined by the logic in the conditional expression.
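The decision encoded by the first two rows of Table 3 can be sketched as follows (a minimal Matlab illustration of the logic only, not actual BrainStream code):

```matlab
% Sketch: the trial event triggers next_trial until 10 trials have been
% acquired, after which it triggers next_sequence instead.
if num_acquired_trials <= 10
    next_event = 'next_trial';     % condition of row 1 holds
else
    next_event = 'next_sequence';  % condition of row 2 holds
end
```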

Markers can also be triggered based on the state of variables; these are called watchdog-triggered markers. Each time a global variable involved in the conditional expression is modified, the expression is evaluated and, if it is true, the corresponding marker is triggered. This is the default behaviour, but the user can control which global variable modifications and which actions (mod, put, or both mod and put) start the evaluation of the expression. For instance, mod(num) means the expression is evaluated only after a mod action for variable num. For watchdog-triggered markers, the marker column can be left empty or contain a dash (-), and the time column can be left empty since triggering is not time-dependent.

DataSelection table

Sometimes a marker signals a time point around which data should be collected. For example, if you are building an ERP-based BCI, you might want to collect a certain amount of data after each stimulus. For that purpose, each marker can specify a segment of data that should come along with the event. In the Actions table, markers calling for data selection have a DATA statement in the time column. The DataSelection table lists the markers that call for data selection and specifies the time period of data selection relative to marker onset.

The DataSelection table consists of a marker column, a begintime column, and an endtime column. Data selection may start before or after onset of the marker, indicated by negative and positive numbers respectively. The end of data selection can be a fixed time before or after the onset of the marker specified in the marker column, the arrival of a new marker (with or without an extra timing offset), or a period in which nothing happens (a timeout). If multiple end times, separated by commas, are specified, the one that happens first ends the data selection.

marker  begintime  endtime        datasource
mrk1    -0.5       2
mrk2    0.5        mrk3+1
mrk4    0          mrk4, mrk5, 3

Table 4: DataSelection table

The table above shows an example of a DataSelection table. The data selection for mrk1 starts half a second before onset of the marker and ends 2 seconds after onset. The data selection for mrk2 starts half a second after the onset of mrk2 and ends one second after marker mrk3 arrives. Marker mrk4 starts data selection immediately, and this ends when a new mrk4 marker arrives. In other words, after the first occurrence of marker mrk4, every new mrk4 marker both ends the previous period of data selection and starts a new one. This sequence ends when marker mrk5 arrives or when no markers come in for 3 seconds. Note that if the specified ending markers never end the data selection and no timeout is specified, the corresponding actions (with time point DATA) will never be executed.

Timing offsets can also be specified in numbers of samples. In that case, the number should be followed by a '#' symbol (for example, 1024#). Time points that are specified in seconds are rounded to the nearest sample number by BrainStream's internal processing. Importantly, the sample corresponding to begintime is not included in the data selection, whereas the sample corresponding to endtime is included. For examples related to this issue, see the DataSelection examples.
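As an illustration of this boundary rule, the selected sample range for the mrk1 row of Table 4 can be computed as follows (a sketch only; the sampling rate and marker sample index are assumed values, not BrainStream internals):

```matlab
Fs            = 512;    % assumed sampling rate, for illustration only
marker_sample = 1000;   % assumed sample index at which mrk1 arrived
begintime     = -0.5;   % from the mrk1 row of Table 4
endtime       = 2;

% The sample at begintime is excluded, the sample at endtime is included:
first = marker_sample + round(begintime*Fs) + 1;   % = 745
last  = marker_sample + round(endtime*Fs);         % = 2024
% selected = data(:, first:last);  % 1280 samples = 2.5 s of data
```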

The datasource column is optional. If multiple data sources are used in the experiment, the datasource column specifies from which source the data should be collected. If only one data source is involved, BrainStream will automatically collect data from this single source only.

Dictionary table

Markers are represented as numbers in, for example, the acquisition hardware or stimulus presentation modules, whereas in BrainStream they are represented as names (strings). The Dictionary table is required to translate incoming markers to their associated names.

The first three columns of the Dictionary table are fixed. In the marker column the marker names are specified. The second column specifies the marker type. The marker can for example be a stimulus or a response marker. The value column specifies the number with which the marker is represented outside BrainStream.

The datasource column is optional. If multiple data sources are used in the experiment, the datasource column specifies for which data source the dictionary information is meant. If only one datasource is involved, this column can be left out and BrainStream will apply the definitions to the single data source.

marker  type      value  datasource
tone    stimulus  10
voice   stimulus  11
button  response  128

Table 5: Dictionary table

Preventing conflicts with imported tables

BrainStream supports the use of imported tables. When BrainStream is started, all experiment definition tables that are used in the experiment - including the imported tables - are combined in a process called table expansion. This means that all Actions tables are integrated into a single Actions table, and the same is true for the DataSelection and Dictionary tables. Importantly, the information in the individual tables should not conflict. For example, you should prevent double definitions of marker names or numbers in the Dictionary tables.

User defined functions and variables

Functions in the 'function' column

In the Actions table, certain actions are assigned to markers. You can define the actions directly in the table, but it is also possible to specify actions by adding user defined functions to your table. The functions that you write may need certain variables as input. An arbitrary number of columns in the Actions table can be used for these user defined variables.

If a function needs a user defined variable, you first need to get it from the global variables. This is done by putting a ‘get’ statement in the corresponding cell. For example, consider the following Actions table:

marker  time   function     feval  looptick  client  var1  ......  varN
mrk1    EVENT  my_fnc1                               get
mrk2    2      my_fnc2(c1)                                         get,put

Table 6: Example Actions table

At the onset of marker mrk1, user defined function my_fnc1 will be executed. This user defined function needs the user defined variable var1 as input. In order to make this variable available to my_fnc1, a 'get' statement is placed in the var1 column. Note that variable var1 is not available to the user defined function my_fnc2 that is executed after marker mrk2, as no 'get' statement is present in the var1 column after this function.

User defined functions must be written in the following format:

event = my_function(event,c1,c2,...)

The input and output argument 'event' is obligatory; event is a Matlab structure variable. The fields of this structure contain copies of the current content of the variables that have a 'get' statement in the table. In the above example, var1 was made available to function my_fnc1 with a 'get' statement. Thus, the my_fnc1 input argument 'event' will contain the field event.var1, which holds a copy of var1. In contrast, the input argument 'event' of function my_fnc2 will not contain the field event.var1 (no 'get' statement in the Actions table), but it will have a field event.varN, which contains a copy of variable varN.
The additional input arguments (c1, c2, ...) are optional. You can enter constants there if your function needs them. For example, in the table above, function my_fnc2 needs the constant c1 as an input argument. Note that in the Actions table you do not need to enter 'event' as the first input argument to your functions, as BrainStream automatically passes the event structure to all functions.

Your user defined function might change the content of some of the variables it receives as input. If you want to save these changes, place a 'put' statement in the corresponding cell of the Actions table, which writes the modified variables back to the global variables. In the example above, changes that my_fnc2 makes to varN are saved, but changes that my_fnc1 makes to var1 are not (no 'put' statement). For more information about loading, modifying and saving user defined variables, see Modifying variables.
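Following the format above, a user defined function for the example Actions table could look like this (a hypothetical sketch; the body of the function is illustrative only):

```matlab
function event = my_fnc2(event,c1)
% Hypothetical user defined function for marker mrk2.
% event.varN holds a copy of global variable varN ('get' statement in
% the varN column). Because the table also contains a 'put' statement,
% the change made here is written back to the global variables.
event.varN = event.varN + c1;   % illustrative modification only
```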

Functions in the 'feval' column

In the feval column, you can specify functions that do not process any of the user defined variables. Examples are Matlab's tic/toc or disp functions. In the example below, tic and toc are used to measure the time it takes to execute function my_fnc1:

marker  time   function  feval
mrk1    EVENT            tic
               my_fnc1   toc

Following the fixed order in which BrainStream executes actions, functions are executed in the order in which they appear in the table: first in the order in which they appear within each row, and then in the order of the rows. In the example above, the order of function execution is: tic - my_fnc1 - toc. The tic/toc pair therefore measures the time it takes to execute function my_fnc1.

In the following table, the execution time of multiple functions will be measured:

marker  time   function   feval
mrk2    EVENT  fnc2       tic
               fnc3,fnc4  toc,tic
               fnc5       toc

In this example, the order of function execution is: fnc2 - tic - fnc3 - fnc4 - toc - tic - fnc5 - toc. Therefore, the first tic/toc combination will measure the time it takes to execute function fnc3 and fnc4, whereas the second tic/toc combination measures the time it takes to execute function fnc5.

Functions specified in the feval column can take user defined variables as input arguments, as in the following example:

marker  time   feval       Var1
mrk1    EVENT  disp(Var1)  get

This table specifies that the value of Var1 is displayed when marker mrk1 arrives. As is the case for functions in the function column, a 'get' statement is required to copy the content of Var1 from the global variables into the event structure.

Functions in the 'looptick' column

Looptick functions are a specific type of loop function that can be used when BrainStream is running in parallel mode. More information on these functions can be found in the advanced topic Looptick Functions.

Block files and common block files

To initiate your experiment in BrainStream, you need to compose one or more initialization run files, also called block files (.blk). Each block file contains or refers to all information needed to initiate the particular block of the experiment. For example, BrainStream needs to know which data acquisition source it is connected to, where to store output, and where the experiment definition tables are located. The format of the block files follows the Windows .ini file style, with topics enclosed in brackets and subsequent lines defining keys that belong to this topic. If another bracketed line is encountered, a new topic is started and subsequent lines add keys to this new topic. The notation of the topics and keys is in Matlab style, which means that every valid Matlab statement is possible here; for example, Matlab's comment character (%) can be used. An example of a block file is shown below. In this example, all minimally required topics with their keys are shown.

eeg = 'buffer://localhost:1972:biosemi_active2'

% keys specific to the eeg data source 

sendMarkerFunction = 'sndMidiMarker'

ExperimentDefinitionFile = '/Volumes/Data/ExpDefs/SubRhythm/SubjectiveRhythm.xls'
OutFolder = '/Volumes/Data/Experiment/'

Block = 'subjective_rhythm'

Additional possible topics with their keys are listed in the BrainStream documentation.

In addition to the listed topics and keys, you can add your own topics and keys, as long as their names are not used by BrainStream itself. The advantage is that these items are readily accessible from your own user defined functions throughout the whole experiment. Information in the block files can be accessed using the bs_get_blockvalue function.
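As a sketch of this mechanism (the topic and key names here are hypothetical, chosen only for illustration), a user defined function could read such a custom key like this:

```matlab
% Hypothetical example: read key 'amplifier' from a user defined
% topic [LabInfo] in the block file, following the (topic, key)
% calling convention of bs_get_blockvalue shown later in this section.
amplifier = bs_get_blockvalue('LabInfo','amplifier');
disp(amplifier)
```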

In the block settings definitions, direct assignments to subfields are also possible:

pp.downsample.targetFs = 256
pp.bcd.doplot          = 1

This is especially convenient for grouped settings that need to be processed together. For instance, when calling functions from the FieldTrip toolbox, parameters are passed to the functions through a Matlab structure (conventionally named cfg). In the block settings, define:

cfg.method      = 'mtmfft'; 
cfg.output      = 'fourier';
cfg.foilim      = [60 60];
cfg.taper       = 'dpss'; 
cfg.tapsmofrq   = 20; 
cfg.keeptrials  = 'yes';
cfg.keeptapers  = 'yes';  

then in your BrainStream function:

cfg  = bs_get_blockvalue('freqanalysis','cfg');
freq = ft_freqanalysis(cfg, data);

This provides direct access to the cfg parameters, and a different topic with configuration settings can be defined for each type of FieldTrip analysis that is required. Since Matlab syntax is allowed in the block settings assignments, FieldTrip configuration settings can be specified for BrainStream in the same way as in a regular Matlab script.

If an experiment consists of several blocks, the topic and key combinations that are the same for all blocks can be put in a common block file. The structure of a common block file is the same as that of other block files, but settings specified in a common block file are applied to all blocks. For example, information that is specific to the lab in which the experiment takes place could be put in a common block file. The advantage is that you do not have to specify the same information more than once, and if you change a setting in the common block file it is applied to all blocks.
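For example, a common block file holding lab-specific settings could look like this (the topic and key names are hypothetical; as described above, any topic name not used by BrainStream itself is allowed):

```
[LabInfo]
amplifier = 'BioSemi ActiveTwo'  % hypothetical user defined key
labname   = 'Lab 1'
```

These keys are then available in every block via bs_get_blockvalue.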

Referencing to other block files

It is possible to incorporate block settings defined in one block file into another. For example, in one block file (block1.blk) you may have specified certain stimulus settings under the topic CommonStimulusSettings:

[CommonStimulusSettings]
numstim  = 20;  %present 20 stimuli
soa      = 300; %stimulus onset asynchrony is 300 ms
duration = 50;  %stimulus duration is 50 ms

If you want to incorporate the same settings in another block file, you can do this with the @ symbol, followed by the name of the block file and the topic you want to include. For example:

@block1.blk CommonStimulusSettings

All keys defined under the topic CommonStimulusSettings in block file block1.blk will be added to the topic under which this line appears (here, the StimulusSettings topic).

Note that the block file to which you refer must be located in the same folder as the block file currently being processed, or on the Matlab search path.

BrainStream project files

References to all block files and common block files can be put together in a single separate BrainStream project file (.bsp, or .exp for older versions), with topics Blocks and CommonBlocks. Both topics define the Files key, in which a cell array of block file names defines which block files take part in the experiment. This single file then defines your whole BCI experiment. For example, if your experiment is defined by three functional blocks, namely train.blk, classifier.blk and feedback.blk, and one common block file lab1.blk, the following project file is sufficient to include them all at once when starting BrainStream:

[Blocks]
Files = {'train.blk', ...
         'classifier.blk', ...
         'feedback.blk'}

[CommonBlocks]
Files = {'lab1.blk'}

Avoid absolute path names; instead, specify paths relative to the folder where the BrainStream project file is located (e.g. '../myclassifiers/classifier.blk'). When entire folder structures are copied or moved, the corresponding block files will then still be found.