

THIS PAGE IS STILL UNDER CONSTRUCTION!

Feel free to poke around, but do not start the lab as things might change!

Lab 8: fMRI Part 2: Preprocessing and Analysis with FSL

Information, Preparation, Resources, Etc.

In the last exercise, you analyzed fMRI data 'by hand' and conducted a simple correlation analysis by fitting an expected activation waveform to each voxel's raw time series. While analyzing by hand was (I hope) instructive, it is not practical for modern studies involving many subjects and several experimental factors. Fortunately, there are powerful programs available to do the heavy lifting for us - programs such as FSL, SPM, AFNI, and Brain Voyager. All of these programs are quite capable and, over the past several years, all have converged upon a very similar feature set. Although there are some minor differences in statistical approach, the choice among these programs is mostly one of taste. I have primarily used AFNI in my own research, but we will use FSL because I believe it offers the best balance of power, flexibility, support, and ease of use.

The lab exercise itself is not as long as this wiki page suggests. There are many choices to make when running an FSL analysis, and I have tried to document the steps you should follow in enough detail that you will not become frustrated by arcane details. However, I don't want you to simply push the buttons I document below without understanding each step (a common pitfall with complex statistics programs)!

As I've said many times…
Don't rush! Be sure you understand why you're doing what you're doing. It is critical that you understand each step of the preprocessing and analysis stream.

Assigned Readings / Videos:

Goals for this lab:

In today's laboratory exercise, you will use FSL to analyze the same block design data that you analyzed 'by hand' in the last lab. Recall that your data sets are typical for localizer tasks.

In this case, the localizer was used to identify brain regions activated by faces and scenes or by right and left hand movements. Each task is well documented in Lab 7 (Functional MRI Part 1) - so please refer to this lab for details about task design, timing, TRs, etc.

Software introduced in this lab

  • Today's lab will make extensive use of the FSL software from the FMRIB group at Oxford. In particular, we will use FSL's FEAT (FMRI Expert Analysis Tool) graphical user interface. This interface provides simplified access to FSL's powerful command line programs.
  • The documentation for FSL, in general, and for FEAT, in particular, is very extensive. I cannot repeat it all within this page, so I will provide links throughout this wiki page to the relevant FEAT documents. You can find the user guide for FEAT here.

Some of the images used in tonight's lab are from earlier versions of FSL (and/or from FSL running on a Linux machine), so they might not always look identical to what you see when running the program. However, the display should be very similar. I have updated the images in all cases in which the display looks substantially different.

Laboratory Report

Lab Report #8 is due on Apr XXrd @ 1:10 pm.

  • Throughout this (and every) lab exercise page you will find instructions for your lab reports within boxes like this one.

Housekeeping

1. Correct the TR contained in the headers of tonight's data

  • Download this script.
    • Click on the link.
    • Select File –> Download
    • The file should now be in your Downloads directory.

2. Move the script into your ~/Desktop/scripts directory.

3. Run the script from Terminal

source ~/Desktop/scripts/correct_TR.sh

This script will run in your terminal for 10-12 minutes.

Data used in this lab

  • You will use the same data that you were assigned for last week's lab.
    • For each subject you will have:
      • SUBJ_TASK_RUN.nii.gz - The functional data that you used last week. These EPI data have a resolution of 3.5 x 3.5 x 3.5 mm.
      • SUBJ_highres.nii.gz - T1-weighted anatomical volume acquired at high-resolution (1 x 1 x 1 mm)
      • SUBJ_coplanar.nii.gz - T2-weighted anatomical volume with high “in plane” resolution and the same slice thickness as the functional data (0.85 x 0.85 x 3.5 mm)

The skull stripped anatomical images will be used to coregister the functional brain volumes in our analyses. You cannot coregister the functional images if the anatomical images have skull outlines, which is why we will skull strip these brains in Part 1 of the lab.

Throughout this wiki you should replace:

  • SUBJ with the ID of your assigned subject
  • TASK with your task name (face or motor)
  • RUN with whatever run you are analyzing (e.g., run02)

If you ever see face, run01, etc., do not mindlessly copy it! You should always use the filenames that are relevant for YOUR assigned subject and task (face or motor) and your conditions (face or scene & right or left).

Do not proceed using a brain that does not have good/clean activation.

Let me know if you struggled to find decent task-related activity for your subject and task in Lab 7. If that's the case, we can find you a better one.

Part 1: Preprocessing preparatory for First Level Analysis

Skull Stripping

We will use the skills we developed with FSL/BET earlier in the semester to strip both the coplanar and highres anatomical brains. Doesn't it feel like just yesterday? <Sigh> Where does the time go?

If you need a skull stripping refresher, have a look at the BET instructions from Lab 4 and/or the BET User guide.

The skull stripped output brains will (by default) be named SUBJ_coplanar_brain.nii.gz and SUBJ_highres_brain.nii.gz. We will need these skull stripped brains for the rest of the exercise, so be careful to specify the correct names.

1. Create a new directory to store the output from this week's lab.

#!bash
 
# Create the output directory (-p: no error if it already exists)
mkdir -p ~/Desktop/output/lab08

2. Skull strip the coplanar and highres brains.

  • /Users/hnl/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_coplanar.nii.gz
  • /Users/hnl/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_highres.nii.gz

To save yourself a lot of traversing through file manager boxes to find the relevant files, launch FSL from a terminal in which you first change your current directory to a helpful place. For example, you might first cd /Users/hnl/Desktop/ and then start FSL with fsl &.

Remember that you can read the unstripped brains from the input folder, but you must specify your output folder for the skull stripped brains. Do NOT use the default output directory. For example:

input:  /Users/hnl/Desktop/input/fmri/loc/data/nifti/2767/2767_coplanar
output: /Users/hnl/Desktop/output/lab08/2767_coplanar_brain

Preprocessing with FEAT

Preprocessing refers to a sequence of data manipulation processes that precede statistical model fitting. These steps prepare the data for statistical analysis by removing extraneous sources of noise and artifact (i.e., increasing the signal-to-noise ratio, SNR).

Here are the preprocessing steps we will use this week:

  • skull stripping of both anatomical and functional datasets
    • Note: you already stripped the anatomical data sets in Part 1
    • The 4D fMRI images (i.e., functional data set) will be skull stripped within FEAT
  • motion correction
  • temporal filtering (aka “temporal smoothing”)
  • spatial filtering (aka “spatial smoothing”)
  • slice-time correction

We refer to fMRI data as four dimensional (4D) because in addition to our three spatial dimensions (x, y, z), we have time as the fourth dimension.

To get started, start FSL and then choose the FEAT FMRI analysis option. The following FEAT GUI should appear. Note that the GUI has a tabbed interface with Misc, Data, Pre-Stats, Stats, Post-stats, and Registration. We will enter information on each of these tabs before running the program. For additional details regarding FEAT, or for an alternate explanation of the details provided below, please refer to the FEAT user guide.

At the top of the window you'll see a drop down menu with two options: First Level Analysis and Higher Level Analysis. Depending on which you choose, some options will become available or unavailable.

A First Level Analysis measures the degree to which

  • the activation time-course of each individual voxel is related to each regressor (i.e., “predictor”).
  • a voxel is differentially affected by one regressor compared to other regressors (a contrast between regressors).

A Higher Level Analysis will

  • summarize results across all of the runs of an individual subject (Second Level Analysis)
  • combine the data across multiple subjects (Third Level Analysis).

In this part we want the two dropdown menus set to First Level Analysis and Full Analysis.

Note that, by default, FEAT has “balloon help”. If you hover the mouse over an option, a pop-up help message will appear after a few seconds (don't be impatient). This can be very helpful.

FEAT: Data Tab

Detailed information regarding the Data tab can be found here

1. You should first analyze the data from a single run.

  • Set your Number of inputs to 1
  • Press the Select 4D data button.
    • Choose the functional data from your first run: ~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_TASK_RUN.nii.gz

2. Be sure to specify your Output directory. In FSL, the output directories have a very specific format with a standard file naming convention.

  • You will set yours to: /Users/hnl/Desktop/output/lab08/RUN.
  • Don't forget to replace RUN with the actual run number for the 4D file (e.g., run01).

3. Check parameters

  • The TR should come up automatically as 2.0. Let me know if you see a different value.
  • The Total volumes should come up automatically as 150
    • If either of these values is incorrect, make sure you have loaded the correct dataset.

Remember that TR is the “repetition time” between excitation pulses. In the context of functional imaging it is the time between brain volumes (e.g., the time to acquire all 32 slices) and is therefore our temporal resolution. If we have TR = 2s then we get a new data sample from a given voxel every 2 seconds.

We have 150 volumes with TR = 2s. What is the total duration of our time-series?

Remember, a high-pass filter gets rid of low frequency signal. That is, a high-pass filter is like a bouncer that only allows high frequencies to “pass” into the club.

4. The High pass filter cutoff is a temporal filtering parameter. It can be problematic to have very low frequencies in the data (e.g., linear trends and slow drifts due to magnetic instabilities). We can eliminate these by high pass filtering.

  • We must be careful not to set our filter too high, or it might remove some of our signal of interest.
  • Let's try a filter with a period of 60 sec.
    • this means we will filter out any signals that repeat more slowly than once per 60 seconds, i.e., below 1/60 ≈ 0.017 Hz.

FEAT: Pre-stats Tab

The pre-stats tab controls several preprocessing steps. Detailed information regarding the Pre-stats tab can be found here

1. Motion correction. The default motion correction option is to apply MCFLIRT (motion correction FMRIB Linear Registration Tool). MCFLIRT applies a rigid body transformation (i.e., 6 DoF) to the data. That is, it translates in 3 dimensions and rotates in 3 dimensions, but does not stretch or scale. The presumption is that the brain is the same on each volume, and that a subject's motion can only cause the brain to either translate or rotate, not to stretch or shear.

2. The Slice timing correction option partially corrects for the fact that the data from different slices are collected at different times. That is, each slice is collected individually in a sequential order across the duration of the TR. Our data were collected using an interleaved acquisition, so choose this option.

  • Select Interleaved from the drop-down menu.

3. BET brain extraction refers here to the 4D fMRI data, NOT the structural data that you already skull stripped in Part 1.

  • Make sure this is checked.

4. Spatial smoothing FWHM. This will blur your data with a Gaussian kernel. Increasing this value increases the FWHM (full width at half maximum) and thus the amount of smoothing applied to your data. One reason to apply some level of spatial smoothing is to reduce noise by averaging neighboring voxels. Another reason is that it increases the correspondence between different subjects' brains.

  • Leave this set to its default value of 5mm.

5. Temporal filtering has several options. Because we selected a high pass filter on the Data tab, make sure the Highpass filtering option is selected here. This option will actually apply the high pass filter that we specified on the Data tab.

There are several other processes that we are not choosing for this dataset. For example, we are not applying B0 unwarping. This is a process for removing some geometric distortions in the data caused by static variations in the magnetic field over space. While a useful technique, it requires the acquisition of a field map, which we do not have.

FEAT: Registration Tab

The registration tab sets up the coregistration between this subject's structural images and a standard brain. Detailed information regarding the Registration tab can be found here.

Recall that we normalized an individual subject's brain to a template brain earlier in the semester. Well, here we will do this again.

1. Check the box next to Expanded functional image. The Expanded functional image is a structural image that is coplanar (same slice thickness and orientation) with the functional MRI data.

  • For this subject, this should be the skull stripped output file that you created in Part 1: SUBJ_coplanar_brain.nii.gz
  • Select 6 DOF

2. The Main structural image is the high resolution structural image for this subject.

  • For this subject, this should be the skull stripped output file that you created in Part 1: SUBJ_highres_brain.nii.gz
  • Select 6 DOF

3. The Standard space is the standard template to which you wish to normalize the data.

  • We will use the MNI 152 brain template brain at 2x2x2 mm resolution (this is the default)
  • Select 12 DOF

During registration:

  1. The low resolution functional MR images are first registered to the low resolution coplanar structural images.
  2. The low resolution coplanar structural images are then coregistered with the same individual's high resolution main structural image.
  3. This high resolution main structural image is then coregistered with the template MNI brain.
  4. One can then derive a registration of the functional MR images to the template MNI brain by combining the transformation matrices (i.e., directions that tell the software how to get from A to B). All of the various transformation matrices are saved for later use.

Do not select Go yet.

You will continue setting up the analysis in the next section.

LAB REPORT Part 1

  1. Provide a flowchart (with text or graphics) of the preprocessing steps used in this analysis.
    1. Be sure to include a brief description of why each step is performed and what effect you expect it to have on your data.

Part 2: Creating a statistical model for a First Level Analysis

Overview of analysis

FSL conceives of an fMRI analysis as consisting of several levels.

  • First Level Analysis is performed on individual runs. So, if your experiment involves repeating a task several times, each task run would be submitted to a first level analysis. In the first level analysis, a multiple regression (General Linear Model, GLM) is performed by fitting your expected activation templates to the raw time-courses of each individual voxel. Thus, you must specify your model (i.e., when you predict task-related activity to occur) in the first level analysis. The first level analysis also includes all of the pre-processing steps, such as smoothing, motion correction, slice-time correction, etc.
  • Second Level Analysis is used if you have collected multiple experimental runs on a subject. This analysis is run on the individual first-level analyses. In the second level analysis, you are statistically combining the results of each single run into an across-runs summary. If your experiment is limited to a single subject (unlikely, but possible in some clinical contexts), then the second-level analysis is the last step in your analysis.
  • Third Level Analysis is used to combine multiple experimental runs on multiple subjects. This analysis is run on the individual second-level analyses. In the third-level analysis, you are statistically combining the results of each subject into an across-subjects summary. For many experiments, this is the last step in your analysis.
  • Fourth Level Analysis is used if you have collected multiple experimental runs on multiple subjects that constitute two or more treatment groups (e.g., drug group and placebo group). This analysis contrasts the third-level results for the different groups. Fourth-level analyses are common in clinical studies that compare, for example, depressed individuals to non-depressed individuals.

In FSL/FEAT, you have a choice in the GUI to choose First Level Analysis or to choose Higher Level Analysis. The Higher Level Analysis choice is used for second-, third-, and fourth-level analyses. Note that the first-level analysis is the only level that applies the statistical model to raw time courses, and the only level in which pre-processing steps are performed. For these reasons, the first-level analysis takes longer to compute. If you get to a higher level analysis and decide to change your statistical model, you have to go back and recompute the first-level analysis again.

FEAT: Stats Tab

The statistical model is specified in the Stats Tab. This is the most complicated part of the process, and the GUI is quite flexible and allows for the specification of complex experimental designs. Happily, our design is quite simple and easy to specify.

You may wish to consult with the FSL FEAT documentation as you read along with my documentation. Details regarding the Stats Tab can be found here.

Recall that last week you specified a simple single template (and then two templates) of the expected activation and conducted a correlation between that template and the time course of each voxel. Here, instead of using correlation to look for our signal, we will use multiple regression (the 'general linear model' or GLM). FSL has a brief overview of the GLM approach here.

Stimulus Timing Files

The most important prerequisite for specifying the model is to have an accurate stimulus timing file that specifies when the stimulus occurred (in seconds) relative to the beginning of the fMRI time series of volumes. The file also includes values indicating the stimulus:

  • duration
  • relative weighting

When you made your template by hand last week, you specified when the task was on (1) or off (0) for each TR. Here, we are setting those times in seconds, not TRs.

There needs to be a separate timing file for each experimental factor (called explanatory variables, or EVs, in FSL terminology) used in each run. These do the same job that your template matrices did last week. So, in our face-scene localizer task, we will need two stimulus timing files for each run; one for face and another for scene.

We have used lots of different terms to refer to essentially the same thing …

regressor
predictor
explanatory variable (EV)
template

These all refer to the timing of the activation we predict to be caused by our task (or by nuisance regressors such as head motion).

The term model can refer to a single EV, but is often used to refer to the collection of all of our EVs.

The timing file below is for the face condition in the Face Task and for the right hand condition in the Motor Task (these tasks used the same timing and so we can use the same timing file).

30	12	1
78	12	1
126	12	1
174	12	1
222	12	1
270	12	1
  • The three columns represent onset time, duration, and weighting.
    • The onset time is…well…the time the stimulus started.
    • The duration is how long the stimulus was on for (pretty straightforward!)
    • The weighting column is for when we might want to scale our predictors. For example, let's say I play two tones, A and B, and I can play each one at two different volumes: low and high. In that case I might want to set the weighting of the low volume = 1 and of the high volume = 2.
      • For this lab, we will always have the weightings set to 1.
  • Each row represents the start of a new stimulus block (either face perception or right-hand finger tapping)
  • So the first row specifies that our stimulus started at 30 seconds, lasted for 12 seconds, and has a simple weighting of 1. The second block of faces (or right hand tapping) began at 78 seconds and lasted for 12 seconds.
    • There were six blocks of faces in this first data run.

The timing file below is for the scene condition in the Face Task and for the left hand condition in the Motor Task (these tasks used the same timing and so we can use the same timing file).

6 	12	1
54	12	1
102	12	1
150	12	1
198	12	1
246	12	1

Note that the scene and face blocks alternate with a 12 second blank period between the end of one block and the start of the next block. For example, the first scene block begins at 6 s and runs until 18 s (12 s duration), but the first face block does not start until 30 s. In our last lab we used 0s to indicate these timepoints, but here we only indicate the time periods during which we think we can explain the variance, while all other timepoints are assumed to be 0 (i.e., unexplained).

Normally, good experimental design would require you to change the stimulus timing for each run to avoid order effects. In setting up these experiments, we purposely did not change the stimulus timing, but rather used the identical timing in each run. This was done to simplify these demonstration analyses.

You can therefore use the same timing files for faces and scenes for each run, instead of needing to create a unique file for each condition in each run.

1. Create your task timing for each explanatory variable (i.e., one for face and one for scene or one for right-hand and one for left-hand) in a separate text file.

  • Use BBEdit to create these text files (or create them from the Terminal, as sketched after this list).
    • Remember, you never want to use a word processor for things like this!
  • Save the timing in files named:
    • face_timing.txt and scene_timing.txt for the Face task
    • right_timing.txt and left_timing.txt for the Motor task
  • Save these timing files in ~/Desktop/output/lab08.

For the rest of the lab I will only refer to the face and scene conditions. If you've been assigned the Motor task then replace face and scene with right and left.

Full Model - EVs

In setting up our statistical regression model, we wish to create 'templates' of the expected activation for the face blocks and for the scene blocks, separately. This is similar to what you did last week 'by hand'. Now, however, you will use the timing file as input and the FEAT program will generate the template based upon that timing. It will also convolve your expected activation template with a hemodynamic response function (HRF) so that the expected activation template has a shape similar to that expected in a real physiological response.

2. To generate your model, begin by clicking on the Full Model Setup button. A small window will appear with a tabbed interface.


  • We have two EVs - faces and scenes - so choose 2 in the Number of Original EVs.
  • For the first EV (tab 1), give it the name Face.
  • For Basic Shape, choose Custom (3 Column format).
  • For Filename, specify the face_timing.txt file (or whatever you named the stimulus timing file).
  • For Convolution, choose Gamma.
    • The Phase, Stddev, and Mean lag of the HRF will be set to 0, 3, and 6, respectively. These values affect the shape and delay of the expected hemodynamic response.
  • Uncheck the Apply temporal derivative option.
  • Check the Apply temporal filtering option.
    • This will apply the same filter to your expected activation template as you applied to your raw fMRI data on the earlier tab.

Repeat this process for Tab 2, except provide the name Scene and specify the scene_timing.txt stimulus timing file. All other options should be the same.

3. Now choose Contrasts & F-tests. This section presupposes some knowledge on the user's part about specifying statistical tests. You can read about this in detail here.

We are asking our model to create the following four contrasts:

  1. The BOLD response to faces greater than Baseline.
  2. The BOLD response to scenes greater than Baseline.
  3. The Face response greater than the Scene response.
  4. The Scene response greater than the Face response.
Title          EV1   EV2
Face             1     0
Scene            0     1
Face > Scene     1    -1
Scene > Face    -1     1

When you are done, click the Done button. A window will popup showing you your design. By convention, your two expected activation templates will be shown vertically, rather than horizontally. It should look similar to the one shown below.



Do not select Go yet.

You will continue setting up the analysis in the next section.

LAB REPORT Part 2

  1. Include a figure showing your model design.
    1. You can either take a screenshot or find this image in your output directory, named design.png
  2. Why do we convolve our model with an approximate hemodynamic response function?
    1. What benefit does this have over simply time-shifting our box-car model (as you did in the last lab)?

Part 3: Post-statistics significance testing

There are several methods offered by FSL/FEAT for testing the significance of the statistical model. Students should be aware that, in FSL, the higher level analyses use the full range of statistics and variances from the lower level analyses. That is, FSL does not 'pass up' thresholded statistics to the next level of analysis. However, when you review the significance of your model, at any level, you will likely want to correct for the number of statistical comparisons you performed.

Multiple Comparisons

The multiple comparisons problem was described in the last lab and in class. It is important that you understand this problem as it comes up frequently in imaging research (where there can be tens of thousands of voxels, and each is treated as a dependent variable). It also comes up in many other areas of research, such as genetics, where many thousands of gene variants are regressed against thousands of phenotypes.

  • Here is a link to a brief discussion of the multiple comparison problem.
  • Wikipedia also has a very nice general discussion of the multiple comparisons problem, and ends with links to additional helpful articles.
  • This link points to a discussion of various methods for correcting for multiple comparisons frequently used in fMRI research.
  • This link points to a tutorial on using Gaussian Random Field Theory as an alternative to Bonferroni correction in smooth images.

We will set our method for correcting for multiple comparisons in the Thresholding section of the Post-stats tab. It is nearly impossible to publish results that have not been corrected for multiple comparisons. However, the very fact that there are choices in the method applied suggests that the field is not in agreement upon how to do this. The Post-stats tab offers four choices for thresholding:

  1. None will show the statistical value (z-value) for each and every voxel.
  2. Uncorrected will only show voxels above a specified z-value, but will not correct for multiple comparisons.
  3. Voxel will correct for multiple comparisons using Gaussian Random Field Theory.
  4. Cluster will correct for multiple comparisons based on the number of contiguous “activated” voxels in a cluster.

For your first level analyses, it doesn't really matter what correction you apply (or, even no correction) because you will be combining the results of your two runs into a second level analysis. As I've mentioned elsewhere on this wiki page, the full statistics from lower level analysis are passed upward for higher level analyses. When you run the second level analysis, you'll compare the different corrections for multiple comparisons.

For our first-level analysis let's be very liberal and set Thresholding to Uncorrected with a P threshold of 0.05.

LAB REPORT Part 3

  1. There are no questions for this part.

Part 4: Running the First Level Analysis

Run the first functional run

1. Once you have entered all of the required information into the FEAT GUI, you should save the file you have created to disk.

  • Select the Save button on the bottom of the Data tab.
  • Make sure to give it a name you'll remember (e.g., SUBJ_run01_preproc).
  • FSL will append an .fsf file extension to your designated name.

2. And now for the moment of truth. Pretty exciting, right!? Don't lie; you know you're excited.

  • Click on the Go button.
    • If you have entered everything correctly, FSL will start processing according to the parameters you entered.

FSL creates an HTML file (report.html) in your designated output directory. The processing progress will be written into this HTML file. Normally this report.html will automatically open in a web browser and you can watch the progress in real-time.

It will take some time to complete processing (10-12 minutes). While running, the words Still Running appear in red font on your HTML log page.

Run the second functional run

While FSL is working on run01 (assuming that it is running correctly), you can start setting up your First Level Analysis for the second and third functional runs for your participant.

1. Set up FEAT for run02

  1. In the Data tab select the input file for run02 (via Select 4D data)
  2. Set your output directory
    1. (of course this should now be run02)
  3. Save your setup file for run02 as you did for run01

All other parameters are still set correctly from run01, so you don't have to change anything on any of the other tabs.

2. Press Go

  • You can do this even if your first analysis is still running, though it might slow down your computer quite a bit.

Run the third functional run

If you are analyzing data from the Face task, your subject will have a third run of data. Go ahead and repeat the steps above to run the First Level Analysis for run03. Subjects in the Motor Task only have two runs of data.

While you wait for your two (or three) runs of data to finish processing, you should read on through the wiki to preview what steps are coming up next.

Run analysis on data that has not been preprocessed

The last first-level analysis we will run will be on data that does not get preprocessed. In your open FEAT window make the following changes:

  • Data tab
    • select the run01 data for your task
    • set the output directory to /Users/hnl/Desktop/output/lab08/run01_nopreproc
  • Pre-stats tab
    • Set Motion correction to None
    • Set Slice timing correction to None
    • Set spatial smoothing to 0.0
    • For temporal filtering, unselect Highpass
  • Stats tab
    • On the EVs 1 tab, unselect Apply temporal filtering
    • Do the same for the EV 2 tab
  • Everything else can stay the same
  • Go

Check for Errors on your Report Log HTML Page

As long as your HTML log file is not printing pages of error messages, you should be fine. However, if you do observe error messages in your HTML log, read them carefully. Errors will almost certainly be due to an incorrect specification of an input file (wrong name, wrong path, etc.), or because you are trying to write output to a “read only” folder. Check carefully and try to solve these errors on your own before calling me to help. Solving mundane technical problems is a much bigger part of science (at least cognitive neuroscience) than we care to admit!

LAB REPORT Part 4

  1. There are no questions for this part.

Part 5: Exploring your results

FEAT Output

FEAT will create a directory with whatever output name you specified and the suffix .feat. So if you specified ~/Desktop/output/lab08/run01 as your output directory, FEAT will create the directory ~/Desktop/output/lab08/run01.feat.

Inside this directory there will be a file named report.html that contains a log of activity and results. This file will be displayed as a web page in your internet browser.

The directory will also contain lots of other files and subdirectories. A full list can be found here. For this exercise we will simply look at the report.html file, but in a later lab we will learn how to investigate the output interactively with FSLeyes.

Review the Results on the HTML Page

For this section, look at the output for any of your first-level analyses except run01_nopreproc. We'll look at that one later.

You can begin to review your results as soon as the HTML output indicates that the first level analysis is complete.

There is a lot of information in the HTML file. To get you jump-started, click on the Pre-stats tab and you'll see the motion correction estimates that we discussed in lecture.

Now click on Post-stats and scroll down to see the output for the four contrasts that you specified.

  • zstat1 - C1(Face)
    • The first array of brain slices shows the results of your first contrast (either face or right hand, depending on your experiment) compared to all non-modeled time points (i.e., the short rest periods of 12 secs between each block).
      • In other words, this shows all the voxels in which the condition predicted the brain activity at p < .05. Importantly, this does not mean the voxel was selectively activated by faces (or the right hand) because, for example, a voxel might simply be responding to visual stimulation vs. no visual stimulation.
  • zstat2 - C2(Scene)
    • The second array of brain slices shows the results of your second contrast (either scene or left hand, depending on your experiment) compared to all non-modeled time points (i.e., the short rest periods of 12 secs between each block).
      • In other words, this shows all the voxels in which the condition predicted the brain activity at p < .05. Importantly, this does not mean the voxel was selectively activated by scenes (or the left hand) because, for example, a voxel might simply be responding to visual stimulation vs. no visual stimulation.

Note that you can have the same voxels activated in both of these first two contrasts. For example, you might expect that visual cortex is activated by both faces and scenes, and so those visual cortex voxels should be activated in both of the first two contrasts.

  • zstat3 - C3(Face > Scene) shows voxels that are activated more by faces than by scenes.
  • We've subtracted out all of the activation that was in common between faces and scenes (e.g., early visual cortex) and can therefore infer that these voxels are particularly sensitive to faces.
  • zstat4 - C4(Scene > Face) shows voxels that are activated more by scenes than by faces.
  • We've subtracted out all of the activation that was in common between faces and scenes (e.g., early visual cortex) and can therefore infer that these voxels are particularly sensitive to scenes.

Unlike the first two contrasts, no voxels should be in common between these latter two contrasts.

It will be more informative to fully investigate your activation results after you complete the Second Level Analysis. So, look at your results here in the HTML output, visually compare the activation patterns in the first and second runs to get a sense of the consistency, and then move on.

LAB REPORT Part 5

  1. Include a figure of your run 1 contrast 3 results taken from your html output file.
    1. What do these results show? (I'm looking for a general description of the contrast more than a detailed analysis of the specific activation pattern. In other words, what question could contrast 3 answer that the other contrasts could not?)

Part 6: What did preprocessing do to your input data?

Several pre-processing steps were included in the first level analysis. Were they effective? Let's do a direct comparison of our raw data and the preprocessed data using FSLeyes to see the effects of preprocessing. Some will be obvious, whereas others are more subtle.

1. Launch FSLeyes.

2. Load two files:

  • your raw data set
    • e.g., ~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_face_run01.nii.gz
  • your preprocessed data set
    • e.g., ~/Desktop/output/lab08/run01/filtered_func_data.nii.gz

Of course, replace SUBJ and face with the appropriate subject ID and task.

3. Open a second viewer to view the files side by side

  • View → Ortho View
    • In the left window, deselect the filtered_func_data.nii.gz data by deselecting the blue eye.
    • In the right window, deselect the SUBJ_face_run01.nii.gz data by deselecting the blue eye next to it.
    • You should now have the raw data displayed in the left windows and your preprocessed data displayed in the right windows.

Notice that when you click at a location on one of the brains, the cursor will jump to the same location in the other brain.

4. Display the raw and preprocessed data sets' time-series.

  • Press command + 3
  • This will only turn on the time-series from one of the brains. Turn on the other one by selecting the grey eye in the Overlay list (the list in the Time series window, not in the Ortho View windows).
  • Select Normalised from the Plotting Mode drop down list.

5. Turn off the modeled time-series.

  • You should now see three different waveforms: the raw voxel time-series, the pre-processed time-series, and the fitted model. Let's turn off the fitted model. To do so:
    • Highlight the filtered_func_data in the Overlay list
    • Select the wrench icon in the Time Series window.
    • Unselect Plot full model fit from the FEAT settings
      • In the image below, the Plot full model fit has not yet been unchecked.

6. Compare the raw time-series with the preprocessed time-series. You might want to try this at a few different voxels.

  • Can you see differences in the time-series?
  • The easiest difference to identify 'by eye' is the impact of the highpass filter.
  • We'll have another look at the effect of preprocessing in the next section.

LAB REPORT Part 6

  1. There are no questions for this part.

Part 7: How well did your model account for your activation results?

As discussed in the lecture, you should examine your residuals to see how well your model accounts for the time-course of your activations. The residual of a regression is considered error variance, and the Least Squares approach used in regression analysis seeks to minimize the error variance.

You want the ratio of explained variance to unexplained variance to be as large as possible. The residuals represent the unexplained variance and therefore you want them as small as possible.

1. Load the following files into fsleyes:

  • run01.feat/filtered_func_data.nii.gz
  • run01.feat/thresh_zstat1.nii.gz
    • Change the color from Greyscale to Red-Yellow
    • You should now see “active” voxels overlaid on the brain.

2. Click on a strongly activated voxel (colored yellow).

3. View the time-series

  • Press command + 3
  • Highlight filtered_func_data
  • Click on the wrench icon
  • In the FEAT options…
    • Select Plot residuals
    • Unselect everything else

4. Look at the residual for that voxel and judge whether the activation waveshape is largely absent. If it is, it means that your model successfully accounted for the activation, and there is no task-related activation left in the residual error term.

5. Load the data that was analyzed without preprocessing.

  • Add run01_nopreproc.feat/filtered_func_data.nii.gz to your display.
  • In the Time series window, change the settings to only show the residuals (as you just did above)

It will be best to view these residual timeseries with the Plotting mode set to Normal or Demeaned.

6. You should now see the residuals from each of the two data sets. Remember, these are identical data sets that were analyzed with the exact same model. The only difference was that one was preprocessed and the other was not. Do you observe larger residuals (i.e., more unexplained variance) for the data that was not preprocessed?

LAB REPORT Part 7

  1. Include a figure showing a residuals time-series comparison from a voxel that you think clearly highlights the benefit(s) of preprocessing. That is, a voxel in which you believe the residuals after preprocessing are smaller than the residuals without preprocessing.
    1. Specifically, what do you observe that motivated you to choose this voxel?