For each voxel of a VMR, one unsigned char (1 byte) is stored. If a corresponding V16 (16-bit precision) file is loaded as well, a total of 3 bytes per voxel is occupied by the data.
At its native voxel matrix size (256 x 256 x 256), each VMR uses approx. 16 MB of RAM.
Doubling the resolution in each spatial dimension requires 8 times the memory, i.e. 128 MB for a full-size matrix. BrainVoyager gives you the opportunity to remove all non-head portions of the image, which typically saves roughly 70 percent of the space and brings the VMR down to roughly 35 to 40 MB.
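As a quick sanity check, the VMR/V16 figures can be reproduced with a few lines of Python; a minimal sketch (the helper name vmr_megabytes is just illustrative, the byte counts per voxel are the ones stated above):

```python
def vmr_megabytes(dim: int, with_v16: bool = False) -> float:
    # 1 byte per voxel for the 8-bit VMR, plus 2 bytes if a V16 file is loaded alongside it
    bytes_per_voxel = 1 + (2 if with_v16 else 0)
    return dim ** 3 * bytes_per_voxel / 1024 ** 2

print(vmr_megabytes(256))        # ~16 MB for the native 256^3 matrix
print(vmr_megabytes(512))        # ~128 MB after doubling the resolution
print(vmr_megabytes(512) * 0.3)  # ~38 MB if cropping non-head voxels saves about 70 percent
```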
The imaging data in STCs (slice time courses of FMRs) and VTCs is stored as 16-bit integers, while MTCs (surface-related mesh time courses) internally use 32-bit single-precision floats.
For each slice, one STC file is created, covering one acquired slice over time. Hence, a functional run with a 64 x 64 matrix, 30 slices, and 250 volumes leads to
2 x 64 x 64 x 30 x 250 bytes = 58.6 MB
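This number follows directly from 2 bytes per value times the matrix, slice, and volume counts; a minimal sketch of the same arithmetic (the function name is just illustrative):

```python
def stc_megabytes(matrix_x, matrix_y, n_slices, n_volumes, bytes_per_value=2):
    # STC data is stored as 16-bit integers, i.e. 2 bytes per value
    return matrix_x * matrix_y * n_slices * n_volumes * bytes_per_value / 1024 ** 2

print(stc_megabytes(64, 64, 30, 250))  # ~58.6 MB for the example run
```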
The same logic applies to VTC files, but due to isotropic resampling (the default resolution is 3 mm in each spatial dimension) and brain-only volumes, the calculation is slightly different:
2 x 58 x 40 x 46 x 250 bytes = 50.9 MB
If you resample the VTC to a 2 x 2 x 2 mm resolution, you will end up with 171.7 MB, and at a 1 x 1 x 1 mm resolution you would even reach 1.374 GB!
It is hence not advised to resample your functional data to a finer grid than its acquired (physical) resolution!
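To see how quickly the VTC size grows with resolution, the bounding box from the example (58 x 40 x 46 voxels at 3 mm) can be rescaled; a minimal sketch under that assumption:

```python
def vtc_megabytes(dims_at_3mm, n_volumes, voxel_size_mm, bytes_per_value=2):
    # The bounding box stays fixed, so the voxel count scales with (3 / voxel_size)^3.
    scale = (3.0 / voxel_size_mm) ** 3
    n_voxels = dims_at_3mm[0] * dims_at_3mm[1] * dims_at_3mm[2] * scale
    return n_voxels * n_volumes * bytes_per_value / 1024 ** 2

for res in (3, 2, 1):
    print(res, "mm:", round(vtc_megabytes((58, 40, 46), 250, res), 1), "MB")
# 3 mm: ~50.9 MB, 2 mm: ~171.7 MB, 1 mm: ~1374 MB
```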
Usually, you would resample the sphere-based mesh to the standard number of 40,962 vertices per hemisphere. Since the VTC time courses are then sampled at each vertex, both hemispheres would require
4 x 2 x 40962 x 250 bytes = 78.1 MB
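The MTC figure works out analogously with 4 bytes per float value; a minimal sketch:

```python
def mtc_megabytes(n_vertices, n_volumes, n_hemispheres=2, bytes_per_value=4):
    # MTC data is stored as 32-bit floats (4 bytes per value), for every vertex and time point
    return bytes_per_value * n_hemispheres * n_vertices * n_volumes / 1024 ** 2

print(mtc_megabytes(40962, 250))  # ~78.1 MB for both hemispheres
```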
Another crucial point where memory comes into play is the calculation of GLM statistics; the available approaches use quite different amounts of RAM. Let's assume we stick to the VTC case with a 3 x 3 x 3 mm resolution. For the multi-study approach, we assume a total of 20 subjects, each contributing one VTC file (scan) with 25 single-study predictors.
Hence, the number of values stored for one time point (which also matches the number of values in one statistical map) is
58 x 40 x 46 values = 106720 values
which means 213440 bytes (2 bytes per value) for one stored VTC volume and 426880 bytes (4 bytes per float value) for a statistical result volume. For MTCs, the GLM is usually done on a per-hemisphere basis, leading to 40962 values and thus 163848 bytes per surface-based map.
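These per-volume and per-map byte counts are easy to verify; a small sketch using the sizes stated above:

```python
n_vtc_voxels = 58 * 40 * 46   # 106720 values per volume / per statistical map
n_mtc_vertices = 40962        # values per hemisphere-based map

print(n_vtc_voxels * 2)       # 213440 bytes for one stored VTC volume (16-bit data)
print(n_vtc_voxels * 4)       # 426880 bytes for one statistical result volume (floats)
print(n_mtc_vertices * 4)     # 163848 bytes per surface-based map
```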
For a single-study GLM, 2 maps are calculated for each predictor (a beta map and a map holding the explained variance). Additionally, three maps are stored: the correlation of the predictors with the data, an R value map, and the variance, leading to 2 x 25 + 3 = 53 maps in total. Of course, the design matrix X and its inverse are stored as well, but in the single-study case this is negligible...
(2 * 25 + 3) * 4 * 106720 bytes = 21.6 MB for the VTC-based GLM and (2 * 25 + 3) * 4 * 40962 bytes = 8.3 MB for a one-hemisphere MTC-based GLM
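Plugging the map count into the per-map sizes reproduces these figures; a minimal sketch (the helper name glm_megabytes is just illustrative):

```python
def glm_megabytes(n_predictors, values_per_map, bytes_per_value=4):
    n_maps = 2 * n_predictors + 3  # beta + explained-variance map per predictor, plus 3 extra maps
    return n_maps * bytes_per_value * values_per_map / 1024 ** 2

print(glm_megabytes(25, 58 * 40 * 46))  # ~21.6 MB for the VTC-based single-study GLM
print(glm_megabytes(25, 40962))         # ~8.3 MB for a one-hemisphere MTC-based GLM
```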
For a multi-study GLM, there are two approaches: one where you concatenate the functional data over time, and one where you separate the predictors per study and subject. If you choose to concatenate the runs (after normalizing the data with the appropriate option, of course), the number of maps does not change compared to the single-study GLM, since the number of predictors basically stays the same.
If you separate the predictors, the number of core predictors is calculated as
NrOfCorePredictors = NrOfSubjects * NrOfSingleSubjectPredictors + NrOfTimeCourseFiles
which in this case evaluates to
20 * 25 + 20 = 520 core predictors.
For this number, the same calculation as above applies:
(2 * 520 + 3) * 4 * 106720 bytes = 424.6 MB for the VTC-based GLM and (2 * 520 + 3) * 4 * 40962 bytes = 163 MB for a one-hemisphere MTC-based GLM
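The same arithmetic, spelled out for the separated-predictor case; a minimal self-contained sketch:

```python
n_subjects, n_preds_per_subject, n_runs = 20, 25, 20
n_core_predictors = n_subjects * n_preds_per_subject + n_runs  # 20 * 25 + 20 = 520
n_maps = 2 * n_core_predictors + 3                             # 1043 maps

print(n_maps * 4 * 58 * 40 * 46 / 1024 ** 2)  # ~424.6 MB for the VTC-based GLM
print(n_maps * 4 * 40962 / 1024 ** 2)         # ~163 MB for a one-hemisphere MTC-based GLM
```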
Please keep in mind that the additionally stored design matrix X in this case requires some space as well:
4 * (20 * 25 + 20) * 250 bytes = 508 kB
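The design matrix estimate follows the same pattern (a sketch; 4 bytes per float value, with the matrix sized as in the formula above):

```python
n_core_predictors = 20 * 25 + 20  # 520 predictors (columns)
n_time_points = 250               # time points per run, as in the example
print(4 * n_core_predictors * n_time_points / 1024)  # ~508 kB for the stored design matrix X
```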
Although this does not seem like much, this number will increase further with the number of subjects and predictors. So, eventually, it might become the straw that breaks the camel's back!
For an RFX GLM, each predictor is estimated just as with the fixed-effects statistics. However, since the calculation is first done on a per-subject basis (which reduces the number of resulting maps that have to be kept in memory per GLM), the maps are represented differently in the program, allowing memory to be allocated in a different way. So, while the resulting GLM file will be of comparable size, machines that fail to compute an FFX GLM might still be able to run an RFX GLM without problems!