<WRAP center round todo 60%>
<WRAP centeralign>
<typo fs:36px; fc:red; fw: bold; fv:small-caps; ls:1px; lh:1.1>**THIS PAGE IS STILL UNDER CONSTRUCTION!**</typo>

<typo fs:20px; fc:black; fw: bold>Feel free to poke around, but do not start the lab as things might change!</typo>
</WRAP>
</WRAP>

<WRAP centeralign>
<typo ff:'Georgia'; fs:36px; fc:purple; fw: bold; fv:small-caps; ls:1px; lh:1.1>
Lab 8: fMRI Part 2: Preprocessing and Analysis with FSL </typo>
</WRAP>

====== Information, Preparation, Resources, Etc. ======

In the last exercise, you analyzed fMRI data 'by hand' and conducted a simple correlation analysis by fitting an expected activation waveform to each voxel's raw time series. While analyzing by hand was (I hope) instructive, it is not practical for modern studies involving many subjects and several experimental factors. Fortunately, there are powerful programs available to do the heavy lifting for us - programs such as [[http://www.fmrib.ox.ac.uk/fsl/|FSL]], [[http://www.fil.ion.ucl.ac.uk/spm/|SPM]], [[http://afni.nimh.nih.gov/afni|AFNI]], and [[http://www.brainvoyager.com|Brain Voyager]]. All of these programs are quite capable and, over the past several years, all have converged upon a very similar feature set. Although there are some minor differences in statistical approach, the choice among these programs is mostly one of taste. I have primarily used [[http://afni.nimh.nih.gov/afni|AFNI]] in my own research, but we will use [[http://www.fmrib.ox.ac.uk/fsl/|FSL]] because I believe it offers the best balance of power, flexibility, support, and ease of use.

The lab exercise itself is not as long as this wiki page suggests. There are many choices in running an FSL analysis, and I tried to document the steps you should follow in enough detail so that you would not become frustrated by arcane details. <wrap em>However, I don't want you to simply push the buttons I document below without understanding each step</wrap> (a common pitfall when using complex statistics programs)!

<WRAP center round alert 70%>
As I've said many times... \\
**Don't rush! Be sure you understand //why// you're doing what you're doing. It is critical that you understand each step of the preprocessing and analysis stream**.
</WRAP>

/*
To help guide you through these analyses, I encourage you to make use of the FSL course materials that are found [[http://www.fmrib.ox.ac.uk/fslcourse/|here]]. The relevant material is located under the heading //FMRI Preprocessing and Model-Based Analysis (FEAT)//. Lecture 1 is particularly relevant for today's class.
*/
===== Assigned Readings / Videos: =====

  * 

/*
<WRAP centeralign>//__COMPLETE READING PRIOR TO March XX__//</WRAP>

  * {{ :psyc410:documents:essentials_of_fmri.pdf | Wager & Lindquist, 2011.}} Essentials of functional magnetic resonance imaging.
*/
===== Goals for this lab: =====

In today's laboratory exercise, you will use FSL to analyze the same **[[psyc410_s25:fmri_part1#data_used_in_this_lab|block design]]** data that you analyzed 'by hand' in the last lab. Recall that your data sets are typical for **[[psyc410_s25:fmri_part1#data_for_part_2_and_beyond|localizer tasks]]**.
\\
\\
In this case, the localizer was used to identify brain regions activated by faces and scenes or by right and left hand movements. Each task is well documented in [[:psyc410_s25:fmri_part1#task_design_-_part_2|Lab 7 (Functional MRI Part 1)]] - so please refer to this lab for details about task design, timing, TRs, etc.

===== Software introduced in this lab =====

  * Today's lab will make extensive use of the FSL software from the FMRIB group at Oxford. In particular, we will use FSL's ''FEAT'' (**F**MRI **E**xpert **A**nalysis **T**ool) graphical user interface. This interface provides simplified access to FSL's powerful command line programs.
  * The documentation for FSL, in general, and for FEAT, in particular, is very extensive. I cannot repeat it all within this page, so I will provide links throughout this wiki page to the relevant FEAT documents. You can find the user guide for FEAT [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide|here]].

<WRAP center round info 90%>
Some of the images used in tonight's lab are from earlier versions of FSL (and/or from FSL running on a Linux machine), so they might not always look identical to what you see when running the program. However, the display should be very similar. I have updated the images in all cases in which the display looks substantially different.
</WRAP>
===== Laboratory Report =====
<WRAP center round important 70%>
<WRAP centeralign><wrap em>Lab Report #8 is due on Apr XX<sup>rd</sup> @ 1:10 pm.</wrap></WRAP>
  * Throughout this (and all) lab exercise pages you will find instructions for your lab reports within these boxes.
</WRAP>

===== Housekeeping =====

<WRAP center round todo 70%>
**1.** Correct the TR contained in the headers of tonight's data
  * Download [[https://www.dropbox.com/scl/fi/vw8sb3micdxciljbanxyw/correct_TR.sh?rlkey=11rldezu5xgka33ha7jpets8x&dl=0|this script]].
    * Click on the link.
    * Select ''File'' --> ''Download''
    * The file should now be in your ''Downloads'' directory.

**2.** Move the script into your ''~/Desktop/scripts'' directory.

**3.** Run the script from ''Terminal''
<code bash>
source ~/Desktop/scripts/correct_TR.sh
</code>

<WRAP centeralign><wrap em> **This script will run in your terminal for 10-12 minutes** </wrap></WRAP>
</WRAP>
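
If you want to confirm the script did its job, you can print the TR stored in a header from the terminal (a quick check; this assumes the input path used later in this lab, and you should replace ''SUBJ'', ''TASK'', and ''RUN'' as usual):
<code bash>
# Print the TR (pixdim4, in seconds) stored in one run's NIfTI header.
# It should report 2.0 once the correction script has run.
fslval ~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_TASK_RUN.nii.gz pixdim4
</code>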

===== Data used in this lab =====

  * You will use the same data that [[:psyc410_s25:fmri_part1#data_file_names_and_locations|you were assigned]] for last week's lab.
    * For each subject you will have:
      * ''SUBJ''_''TASK''_''RUN''.nii.gz - The functional data that you used last week. These EPI data have a resolution of (3.5 x 3.5 x 3.5 mm)
      * ''SUBJ_highres.nii.gz'' - T1-weighted anatomical volume acquired at high resolution (1 x 1 x 1 mm)
      * ''SUBJ_coplanar.nii.gz'' - T2-weighted anatomical volume with high-resolution "in plane" sampling and the same slice thickness as the functional data (.085 x .085 x 3.5 mm)

The skull stripped anatomical images will be used to //coregister// the functional brain volumes in our analyses. You cannot coregister the functional images if the anatomical images have skull outlines, which is why we will skull strip these brains in Part 1 of the lab.

<WRAP center round info 90%>
Throughout this wiki you should replace:

  * **''SUBJ''** with the ID of your assigned subject
  * **''TASK''** with your task name (''face'' or ''motor'')
  * **''RUN''** with whatever run you are analyzing (e.g., ''run02'')

If you ever see ''face'', ''run01'', etc. do not mindlessly copy it! **You should always be using the filenames that are relevant for YOUR assigned subject and task** (''face'' or ''motor'') and your conditions (''face'' or ''scene'' & ''right'' or ''left'').
</WRAP>

<WRAP center round alert 90%>
<WRAP centeralign><wrap em>Do not proceed using a brain that does not have good/clean activation.</wrap>

**Let me know if you struggled to find decent task-related activity for your subject and task in Lab 7**. If that's the case, we can find you a better one.
</WRAP>
</WRAP>
====== Part 1: Preprocessing preparatory for First Level Analysis ======

===== Skull Stripping =====
We will use the skills we developed with ''FSL/BET'' earlier in the semester to strip both the coplanar and highres anatomical brains. Doesn't it feel like just yesterday? <Sigh> Where does the time go?

<WRAP center round tip 90%>
If you need a skull stripping refresher, have a look at the BET instructions from [[:psyc410_s25:brain_extraction_segmentation#part_3skull_stripping_your_test_brain_using_fsl_bet|Lab 4]] and/or the [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/structural/bet|BET User guide]].
</WRAP>

The skull stripped output brains will (by default) be named ''SUBJ_coplanar_brain.nii.gz'' and ''SUBJ_highres_brain.nii.gz''. We will need these skull stripped brains for the rest of the exercise, so be careful to specify the correct names.

**1.** Create a new directory to store the output from this week's lab.

<code bash>
#!bash

# Create the output directory (-p avoids an error if it already exists)
mkdir -p ~/Desktop/output/lab08
</code>

**2.** Skull strip the coplanar and anatomical brains.
  * ''/Users/hnl/Desktop/input/fmri/loc/data/nifti/SUBJ_coplanar.nii.gz''
  * ''/Users/hnl/Desktop/input/fmri/loc/data/nifti/SUBJ_highres.nii.gz''

<WRAP center round tip 90%>
To save yourself a lot of traversing through file manager boxes to find the relevant files, launch ''fsl'' from a terminal in which you first change your current directory to a helpful place. For example, you might first ''cd /Users/hnl/Desktop/'' and then start FSL with ''fsl &''.
</WRAP>

{{ :psyc410:images:bet01.png }}

<WRAP center round alert 90%>
Remember that you can read the unstripped brains from the ''input'' folder, **but you must specify your ''output'' folder** for the skull stripped brains. <wrap em>Do NOT use the default output directory.</wrap> For example:

  input:  /Users/hnl/Desktop/input/fmri/loc/data/nifti/2767/2767_coplanar
  output: /Users/hnl/Desktop/output/lab08/2767_coplanar_brain
</WRAP>
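
If you prefer the terminal to the GUI, the equivalent ''bet'' command looks something like this (a sketch using the example subject above; ''-f 0.5'' is BET's default fractional intensity threshold and may need tuning for your brain):
<code bash>
# Skull strip the coplanar image, writing the result to your lab08 output folder
bet /Users/hnl/Desktop/input/fmri/loc/data/nifti/2767/2767_coplanar \
    /Users/hnl/Desktop/output/lab08/2767_coplanar_brain -f 0.5
</code>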

===== Preprocessing with FEAT =====
**//Preprocessing//** refers to a sequence of data manipulation processes that precede statistical model fitting. These steps //prepare// the data for statistical analysis by removing extraneous sources of noise and artifact (i.e., they increase SNR).

Here are the preprocessing steps we will use this week:
  * **skull stripping** of both anatomical and functional datasets
    * Note: you already stripped the anatomical data sets in Part 1
    * The 4D fMRI images (i.e., functional data set) will be skull stripped within ''FEAT''
  * **motion correction**
  * **temporal filtering** (aka "temporal smoothing")
  * **spatial filtering** (aka "spatial smoothing")
  * **slice-time correction**

<WRAP center round tip 60%>
We refer to fMRI data as four dimensional (4D) because in addition to our three spatial dimensions (x, y, z), we have time as the fourth dimension.
</WRAP>

To get started, launch FSL and then choose the ''FEAT FMRI analysis'' option. The following FEAT GUI should appear. Note that the GUI has a tabbed interface with ''Misc'', ''Data'', ''Pre-Stats'', ''Stats'', ''Post-stats'', and ''Registration''. **<fc #ff0000>We will enter information on each of these tabs before running the program</fc>**. For additional details regarding FEAT, or for an alternate explanation of the details provided below, please refer to the [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide|FEAT user guide]].

{{ :psyc410:images:Feat01.png }}

At the top of the window you'll see a drop-down menu with two options: ''First Level Analysis'' and ''Higher Level Analysis''. Depending on which you choose, some options will become available or unavailable.

A **First Level Analysis** measures the degree to which
  * the activation time-course of each individual voxel is related to each regressor (i.e., "predictor").
  * a voxel is differentially affected by one regressor compared to other regressors (a //contrast// between regressors).

A **Higher Level Analysis** will
  * summarize results across all of the runs of an individual subject (//Second Level Analysis//)
  * combine the data across multiple subjects (//Third Level Analysis//).

In this part we want the two drop-down menus set to ''First Level Analysis'' and ''Full Analysis''.

<WRAP center round tip 90%>
Note that, by default, FEAT has "balloon help". If you hover the mouse over an option, a pop-up help menu will appear after a few seconds (don't be impatient - it takes a few seconds). This can be very helpful.
</WRAP>

===== FEAT: Data Tab =====
Detailed information regarding the ''Data'' tab can be found [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide?id=the-data-tab|here]].

**1.** You should first analyze the data from a single run.
  * Set your ''Number of inputs'' to ''1''
  * Press the **''Select 4D data''** button.
    * Choose the functional data from your first run: ''~/Desktop/input/fmri/loc/data/nifti/SUBJ_TASK_RUN.nii.gz''

{{ :psyc410:images:feat_level1_select_data.png }}

**2.** Be sure to specify your **''Output directory''**. In FSL, the output directories have a very specific format with a standard file naming convention.
  * You will set yours to: ''/Users/hnl/Desktop/output/lab08/RUN''.
  * Don't forget to replace ''RUN'' with the actual run number for the 4D file (e.g., ''run01'').

**3.** Check parameters
  * The **TR** should come up automatically as ''2.0''. Let me know if you see a different value.
  * The **Total volumes** should come up automatically as ''150''
    * If either of these values is incorrect, make sure you have loaded the correct dataset.
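
If either value looks wrong, you can also inspect the file directly from the terminal (assuming the same input path as above):
<code bash>
# Should print 150 (the number of volumes in the 4D time series)
fslnvols ~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_TASK_RUN.nii.gz
</code>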

<WRAP center round info 90%>
Remember that ''TR'' is the "repetition time" between excitation pulses. In the context of functional imaging it is the time between brain volumes (e.g., the time to acquire all 32 slices) and is therefore our temporal resolution. If we have TR = 2s then we get a new data sample from a given voxel every 2 seconds.
</WRAP>

<WRAP center round help 90%>
We have 150 volumes with TR = 2s. What is the total duration of our time-series?
</WRAP>

<WRAP center round tip 90%>
Remember, a high-pass filter gets rid of low frequency signal. That is, a high-pass filter is like a bouncer that only allows high frequencies to "pass" into the club.
</WRAP>

**4.** The **''High pass filter cutoff''** is a temporal smoothing parameter. It can be problematic to have very low frequencies in the data (e.g., linear trends and slow drifts due to magnetic instabilities). We can eliminate these by high pass filtering.
  * We must be careful not to set our filter too high, or it might remove some of our signal of interest.
  * Let's try a filter with a period of ''60'' sec.
    * This means we will filter out any signals that are slower than once per 60 seconds, or 1/60 ≈ 0.017 Hz.

{{ :psyc410:images:feat_level1_setup.png }}

===== FEAT: Pre-stats Tab =====
The Pre-stats tab controls several preprocessing steps. Detailed information regarding the ''Pre-stats'' tab can be found [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide?id=the-pre-stats-tab|here]].

**1.** **''Motion correction''**. The default motion correction option is to apply ''MCFLIRT'' (**m**otion **c**orrection **F**MRIB **L**inear **R**egistration **T**ool). MCFLIRT applies a //rigid body transformation// (i.e., 6 DoF) to the data. That is, it translates in 3 dimensions and rotates in 3 dimensions, but does not stretch or scale. The presumption is that the brain is the same on each volume, and that a subject's motion can only cause the brain to either translate or rotate, not to stretch or shear.
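
FEAT runs MCFLIRT for you, but for reference the standalone command looks roughly like this (a sketch; the output name is hypothetical):
<code bash>
# Rigid-body (6 DoF) motion correction; -plots also saves the motion parameters
mcflirt -in SUBJ_TASK_RUN -out SUBJ_TASK_RUN_mc -plots
</code>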

**2.** The **''Slice timing correction''** option partially corrects for the fact that the data from different slices are collected at different times. That is, each slice is collected individually in a sequential order across the duration of the TR. Our data were collected using an ''interleaved acquisition'', so choose this option.
  * Select ''Interleaved'' from the drop-down menu.

**3.** **''BET brain extraction''** refers here to the 4D fMRI data, NOT the structural data that you already skull stripped in Part 1.
  * Make sure this is checked.

**4.** **''Spatial smoothing FWHM''** This will blur your data with a Gaussian kernel. Increasing this value will increase the FWHM and thus the amount of smoothing applied to your data. One reason to apply some level of spatial smoothing is to reduce noise through averaging neighboring voxels. Another reason is that it increases the correspondence between different subjects' brains.
  * Leave this set to its default value of ''5mm''
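
A small aside: FSL's command-line tools specify Gaussian kernels by their standard deviation (sigma) rather than by FWHM, where FWHM ≈ 2.355 x sigma. The 5 mm FWHM used here therefore corresponds to sigma ≈ 5 / 2.355 ≈ 2.12 mm. If you ever need to smooth outside of FEAT (a sketch, not part of tonight's pipeline):
<code bash>
# Apply ~5 mm FWHM (sigma = 2.12 mm) Gaussian smoothing by hand
fslmaths SUBJ_TASK_RUN -s 2.12 SUBJ_TASK_RUN_smooth
</code>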

**5.** **''Temporal filtering''** has several options. Because we selected a high pass filter on the ''Data'' tab, make sure the ''Highpass'' filtering option is selected here. This option will actually apply the high pass filter that we specified on the ''Data'' tab.

There are several other processes that we are not choosing for this dataset. For example, we are not applying **''B<sub>0</sub> unwarping''**. This is a process for removing some geometric distortions in the data caused by static variations in the magnetic field over space. While a useful technique, it requires the acquisition of a field map, which we do not have.

===== FEAT: Registration Tab =====

The Registration tab sets up the coregistration between this subject's structural images and a standard brain. Detailed information regarding the Registration tab can be found [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide?id=the-registration-tab|here]].

Recall that we previously normalized an individual subject's brain to a template brain [[:psyc410_s25:brain_registration_atlases#part_2transforming_brains_into_a_common_space_using_flirt|earlier in the semester]]. Well, here we will do this again.

**1.** Check the box next to ''Expanded functional image''.
The **''Expanded functional image''** is a structural image that is coplanar (same slice thickness and orientation) with the functional MRI data.
  * For this subject, this should be the **skull stripped** output file that you created in [[#Skull Stripping|Part 1]] -- ''SUBJ_coplanar_brain.nii.gz''
  * Select ''6 DOF''

**2.** The **''Main structural image''** is the high resolution structural image for this subject.
  * For this subject, this should be the **skull stripped** output file that you created in [[#Skull Stripping|Part 1]] -- ''SUBJ_highres_brain.nii.gz''
  * Select ''6 DOF''

**3.** The **''Standard space''** is the standard template to which you wish to normalize the data.
  * We will use the MNI 152 template brain at 2x2x2 mm resolution (this is the default)
  * Select ''12 DOF''

{{ :psyc410:images:feat_reg.png?nolink&500 |}}

During registration:
  - The low resolution functional MR images are first registered to the low resolution coplanar structural images.
  - The low resolution coplanar structural images are then coregistered with the same individual's high resolution main structural image.
  - This high resolution main structural image is then coregistered with the template MNI brain.
  - One can then derive a registration of the functional MR images to the template MNI brain by combining the transformation matrices (i.e., directions that tell the software how to get from A to B). All of the various transformation matrices are saved for later use.
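
FEAT saves these matrices in the ''reg'' subdirectory of your output directory and concatenates them for you. Conceptually, the combination step can be done with FSL's ''convert_xfm'' (a sketch; the matrix names below follow FEAT's usual conventions, but check your own ''reg'' folder):
<code bash>
# Combine functional->highres with highres->standard to get functional->standard.
# Note that -concat applies its argument matrix AFTER the input matrix.
convert_xfm -omat example_func2standard.mat \
            -concat highres2standard.mat example_func2highres.mat
</code>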

<WRAP center round alert 60%>
<WRAP centeralign>**__Do not select ''Go'' yet__**.</WRAP>
<WRAP centeralign>You will continue setting up the analysis in the next section.</WRAP>
</WRAP>

===== LAB REPORT Part 1 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 1
</typo>
</WRAP></WRAP>

  - Provide a flowchart (with text or graphics) of the preprocessing steps used in this analysis.
    - Be sure to include a brief description of why each step is performed and what effect you expect it to have on your data.
</WRAP>

====== Part 2: Creating a statistical model for a First Level Analysis ======

===== Overview of analysis =====

/*A detailed description of multi-level analysis in FSL can be found [[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FEAT/UserGuide#Group_Statistics|here]].*/

FSL conceives of an fMRI analysis as consisting of several **levels**.

  * **__First Level Analysis__** is performed on individual runs. So, if your experiment involves repeating a task several times, each task run would be submitted to a first level analysis. In the first level analysis, a multiple regression (**G**eneral **L**inear **M**odel, GLM) is performed by fitting your expected activation templates to the raw time-courses of each individual voxel. Thus, you must specify your model (when do you predict activity associated with your task) in the first level analysis. The first level analysis also includes all of the pre-processing steps, such as smoothing, motion correction, slice-time correction, etc.

  * **__Second Level Analysis__** is used if you have collected multiple experimental runs on a subject. This analysis is run on the individual first-level analyses. In the second level analysis, you are statistically combining the results of each single run into an //across-runs// summary. If your experiment is limited to a single subject (unlikely, but possible in some clinical contexts), then the second-level analysis is the last step in your analysis.

  * **__Third Level Analysis__** is used to combine multiple experimental runs on multiple subjects. This analysis is run on the individual second-level analyses. In the third-level analysis, you are statistically combining the results of each subject into an //across-subjects// summary. For many experiments, this is the last step in your analysis.

  * **__Fourth Level Analysis__** is used if you have collected multiple experimental runs on multiple subjects that constitute two or more treatment groups (e.g. drug group and placebo group). This analysis contrasts the third-level results for the different groups. Fourth-level analyses are common in clinical studies that compare, for example, depressed individuals to non-depressed individuals.

In FSL/FEAT, you have a choice in the GUI to choose ''First Level Analysis'' or to choose ''Higher Level Analysis''. The Higher Level Analysis choice is used for second-, third-, and fourth-level analyses. Note that the first-level analysis is the only level that applies the statistical model to raw time courses, and the only level in which pre-processing steps are performed. For these reasons, the first-level analysis takes longer to compute. If you get to a higher level analysis and decide to change your statistical model, you have to go back and recompute the first-level analyses.

===== FEAT: Stats Tab =====

The statistical model is specified in the Stats tab. This is the most complicated part of the process, and the GUI is quite flexible and allows for the specification of complex experimental designs. Happily, our design is quite simple and easy to specify.

You may wish to consult the FSL FEAT documentation as you read along with my documentation. Details regarding the Stats tab can be found [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide?id=the-stats-tab|here]].

{{ psyc410:images:feat_stats01.png }}

Recall that last week you specified a simple single template (and then two templates) of the expected activation and we conducted a correlation between that template and the time course of each voxel. Here, instead of using correlation to look for our signal, we will use multiple regression ('general linear model' or GLM). FSL has a brief overview of the GLM approach [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/overview_of_glm_analysis|here]].

==== Stimulus Timing Files ====

The most important prerequisite for specifying the model is to have an accurate stimulus timing file that specifies **when the stimulus occurred (in seconds) relative to the beginning of the fMRI time series of volumes**. The file also includes values indicating the stimulus:
  * duration
  * relative weighting

<WRAP center round alert 70%>
When you made your template by hand last week, you specified when the task was on (''1'') or off (''0'') for each TR. Here, we are setting those times in seconds, not TRs.
</WRAP>

**There needs to be a separate timing file for each experimental factor (called //explanatory variables//, or //EVs//, in FSL terminology) used in each run**. These do the same job that your template matrices did last week. So, in our face-scene localizer task, we will need __two stimulus timing files for each run__; one for ''face'' and another for ''scene''.

<WRAP center round info 80%>
We have used lots of different terms to refer to essentially the same thing ...

**regressor** \\
**predictor** \\
**explanatory variable (EV)** \\
**template**

These all refer to the timing of the activation we predict to be caused by our task (or by nuisance regressors such as head motion).

The term **model** can refer to a single EV, but is often used to refer to the collection of //all// of our EVs.
</WRAP>

The timing file below is for the ''face'' condition in the __Face Task__ //and// for the ''right hand'' condition in the __Motor Task__ (these tasks used the same timing and so we can use the same timing file).

<code text>
30 12 1
78 12 1
126 12 1
174 12 1
222 12 1
270 12 1
</code>

  * The three columns represent **onset time**, **duration**, and **weighting**.
    * The onset time is...well...the time the stimulus started.
    * The duration is how long the stimulus was on for (pretty straightforward!)
    * The weighting column is for when we might want to scale our predictors. For example, let's say I play two tones: A and B. And I can play each one at two different volumes: low and high. In that case I might want to set the weighting for the low volume = 1 and for the high volume = 2.
      * For this lab, we will always have the weightings set to 1.
  * Each row represents the start of a new stimulus block (either face perception or right-hand finger tapping)
  * So the first row specifies that our stimulus started at 30 seconds and lasted for 12 seconds, and that we will have a simple weighting of 1. The second block of faces (or right hand tapping) began at 78 seconds and lasted for 12 seconds.
    * There were six blocks of faces in this first data run.

The timing file below is for the ''scene'' condition in the __Face Task__ //and// for the ''left hand'' condition in the __Motor Task__ (these tasks used the same timing and so we can use the same timing file).

<code text>
6 12 1
54 12 1
102 12 1
150 12 1
198 12 1
246 12 1
</code>

Note that the scene and face blocks alternate with a 12 second blank period between the end of one block and the start of the next block. For example, the first ''scene'' block begins at 6 s and runs until 18 s (12 s duration), but the first ''face'' block does not start until 30 s. In our last lab we used ''0''s to indicate these timepoints, but here we only indicate the times for the time periods during which we think we can explain the variance, while all others are assumed to be ''0''s (i.e., unexplained).

<WRAP center round info 90%>
Normally, good experimental design would require you to change the stimulus timing for each run to avoid //order effects//. In setting up these experiments, we purposely did **not** change the stimulus timing, but rather used the identical timing in each run. This was done to simplify these demonstration analyses.

__You can therefore use the same timing files for faces and scenes for each run__, instead of needing to create a unique file for each condition in each run.
</WRAP>

**1.** Create your task timing for each explanatory variable (i.e., one for face and one for scene //or// one for right-hand and one for left-hand) in a separate text file.
  * Use ''BBEdit'' to create these text files (or see the command-line sketch below).
    * [[:psyc410_s25:sci_prog#editing_shell_scripts|Remember]], you never want to use a word processor for things like this!
  * Save the timing in files named:
    * **face_timing.txt** and **scene_timing.txt** for the //Face task//
    * **right_timing.txt** and **left_timing.txt** for the //Motor task//
  * Save these timing files in ''~/Desktop/output/lab08''.
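
Because the onsets are perfectly regular (48 s apart), you could also generate these files from the terminal rather than typing them (a sketch for the //Face task//; swap in the ''right''/''left'' filenames for the //Motor task//):
<code bash>
cd ~/Desktop/output/lab08
# Each line is: onset(s) duration(s) weighting
for onset in 30 78 126 174 222 270; do echo "$onset 12 1"; done > face_timing.txt
for onset in 6 54 102 150 198 246; do echo "$onset 12 1"; done > scene_timing.txt
</code>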

<WRAP center round alert 70%>
For the rest of the lab I will only refer to the ''face'' and ''scene'' conditions. If you've been assigned the //Motor task// then replace ''face'' and ''scene'' with ''right'' and ''left''.
</WRAP>

==== Full Model - EVs ====
In setting up our statistical regression model, we wish to create 'templates' of the expected activation for the face blocks and for the scene blocks, separately. This is similar to what you did last week 'by hand'. Now, however, you will use the timing file as input and the FEAT program will generate the template based upon that timing. It will also **convolve** your expected activation template with a hemodynamic response function (HRF) so that the expected activation template has a shape similar to that expected in a real physiological response.
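
For the mathematically curious (my own gloss, not from the FSL docs): a Gamma HRF with mean lag μ = 6 s and standard deviation σ = 3 s (the defaults you will use below) corresponds, in the usual shape/rate parameterization h(t) ∝ t<sup>a-1</sup>e<sup>-bt</sup>, to a shape parameter a = (μ/σ)² = 4 and a rate parameter b = μ/σ² = 2/3, since a Gamma distribution has mean a/b and variance a/b².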

**2.** To generate your model, begin by clicking on the ''Full Model Setup'' button. A small window will appear with a tabbed interface.

{{ psyc410:images:feat_level1_glm.png }}
\\

  * We have two EVs - faces and scenes - so choose ''2'' in the **Number of Original EVs**.
  * For the first EV (tab 1), give it the name ''Face''.
  * For **Basic Shape**, choose ''Custom (3 Column format)''.
  * For **Filename**, specify the ''face_timing.txt'' file (or whatever you named the stimulus timing file).
  * For **Convolution**, choose ''Gamma''.
    * The **Phase**, **Stddev**, and **Mean lag** of the HRF will be set to ''0'', ''3'', and ''6'', respectively. These values affect the shape and delay of the expected hemodynamic response.
  * **Uncheck** the ''Apply temporal derivative'' option.
  * **Check** the ''Apply temporal filtering'' option.
    * This will apply the same filter to your expected activation template as you applied to your raw fMRI data on the earlier tab.

Repeat this process for tab ''2'', except provide the name ''Scene'' and specify the ''scene_timing.txt'' stimulus timing file. All other options should be the same.

**3.** Now choose ''Contrasts & F-tests''. This section presupposes some knowledge on the user's part about specifying statistical tests. You can read about this in detail [[https://fsl.fmrib.ox.ac.uk/fsl/docs/#/task_fmri/feat/user_guide?id=contrasts|here]].

We are asking our model to create the following four contrasts:
  - The BOLD response to faces greater than Baseline.
  - The BOLD response to scenes greater than Baseline.
  - The Face response greater than the Scene response.
  - The Scene response greater than the Face response.

^  Title  ^  EV1  ^  EV2  ^
|Face          |  1   |  0   |
|Scene         |  0   |  1   |
|Face > Scene  |  1   |  -1  |
|Scene > Face  |  -1  |  1   |

{{ psyc410:images:feat_stats03.png }}

When you are done, click the ''Done'' button. A window will pop up showing you your design. By convention, your two expected activation templates will be shown vertically, rather than horizontally. It should look similar to the one shown below.

\\
{{ psyc410:images:feat03.png }}
\\

<WRAP center round alert 60%>
<WRAP centeralign>**__Do not select ''Go'' yet__**.</WRAP>
<WRAP centeralign>You will continue setting up the analysis in the next section.</WRAP>
</WRAP>

===== LAB REPORT Part 2 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 2
</typo>
</WRAP></WRAP>

  - Include a figure showing your model design.
    - You can either take a screenshot or find this image in your output directory, where it is named ''design.png''.
  - Why do we convolve our model with an approximate hemodynamic response function?
    - What benefit does this have over simply time-shifting our box-car model (as you did in the last lab)?
</WRAP>

====== Part 3: Post-statistics significance testing ======

There are several methods offered by FSL/FEAT for testing the significance of the statistical model. Students should be aware that, in FSL, the higher level analyses use the full range of statistics and variances from the lower level analyses. That is, FSL does not 'pass up' thresholded statistics to the next level of analysis. However, when you decide to review the significance of your model, at any level, you will likely want to correct for the number of statistical comparisons you performed.

/*
You may wish to consult the FSL FEAT documentation for the Post-stats tab [[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FEAT/UserGuide#Post-Stats:_Contrasts.2C_Thresholding.2C_Rendering|here]].
*/

<WRAP center round box 90%>
<WRAP centeralign>**Multiple Comparisons**</WRAP>
The multiple comparisons problem was described in the [[psyc410_s25:fmri_part1#part_6a_preview_of_a_problem_-_multiple_comparisons|last lab]] and in class. It is important that you understand this problem as it comes up frequently in imaging research (where there can be tens of thousands of voxels, and each is treated as a dependent variable). It also comes up in many other areas of research, such as genetics, where many thousands of gene variations are regressed against thousands of phenotypes.
  * Here is a [[http://www.biostathandbook.com/multiplecomparisons.html|link]] to a brief discussion of the multiple comparison problem.
  * [[http://en.wikipedia.org/wiki/Multiple_comparisons|Wikipedia]] also has a very nice general discussion of the multiple comparisons problem, and ends with links to additional helpful articles.
  * This [[https://andysbrainbook.readthedocs.io/en/latest/fMRI_Short_Course/fMRI_Appendices/Appendix_A_ClusterCorrection.html#the-problem-of-multiple-comparisons|link]] points to a discussion of various methods for correcting for multiple comparisons frequently used in fMRI research.
  * This [[http://imaging.mrc-cbu.cam.ac.uk/imaging/PrinciplesRandomFields#head-504893e8afe62f1e3e8aaf3cb368a1d389261ef5|link]] points to a tutorial on using Gaussian Random Field Theory as an alternative to Bonferroni correction in smooth images.
</WRAP>
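
To make the scale of the problem concrete: if you tested, say, 50,000 voxels (a round number used purely for illustration) at an uncorrected threshold of p < .05, you would expect roughly 50,000 x 0.05 = 2,500 voxels to pass threshold by chance alone, even if there were no task-related signal anywhere in the brain.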

We will set our method for correcting for multiple comparisons in the ''Thresholding'' section of the ''Post-stats'' tab. It is nearly impossible to publish results that have not been corrected for multiple comparisons. However, the very fact that there are choices in the method applied suggests that the field is not in agreement upon how to do this. The Post-stats tab offers four choices for thresholding:
  - ''None'' will show the statistical value (z-value) for each and every voxel.
  - ''Uncorrected'' will only show voxels above a specified z-value, but will not correct for multiple comparisons.
  - ''Voxel'' will correct for multiple comparisons using [[https://matthew-brett.github.io/teaching/random_fields.html|Gaussian Random Field Theory]].
  - ''Cluster'' will correct for multiple comparisons based on the number of contiguous "activated" voxels in a cluster.

{{ :psyc410:images:feat_post_stats.png?nolink&400 |}}

For your first level analyses, it doesn't really matter what correction you apply (or, even no correction) because you will be combining the results of your two runs into a second level analysis. As I've mentioned elsewhere on this wiki page, the full statistics from lower level analyses are passed upward for higher level analyses. When you run the second level analysis, you'll compare the different corrections for multiple comparisons.

For our first-level analysis let's be very liberal and set **Thresholding** to ''Uncorrected'' with a P threshold of ''0.05''.

FSL also includes other tests that are not yet accessible through the FEAT GUI, but can be applied through the command line. One relatively new correction for multiple comparisons available through FSL's command line is the //False Discovery Rate (FDR)//. Tom Nichols has an excellent website that discusses and demonstrates the FDR procedure for imaging data. His website includes a slide presentation that can be found [[https://warwick.ac.uk/fac/sci/statistics/staff/academic-research/nichols/software/fdr/#Slides|here]].

===== LAB REPORT Part 3 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 3
</typo>
</WRAP></WRAP>

  - There are no questions for this part.
</WRAP>

====== Part 4: Running the First Level Analysis ======

===== Run the first functional run =====
**1.** Once you have entered all of the required information into the FEAT GUI, you should <wrap em>save the file you have created to disk</wrap>.
  * Select the ''Save'' button on the bottom of the ''Data'' tab.
  * Make sure to give it a name you'll remember (e.g., ''SUBJ_run01_preproc'').
  * FSL will append an ''.fsf'' file extension to your designated name.
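
As an aside, a saved ''.fsf'' file can also be run without the GUI, which becomes very handy once you have many runs or subjects (a sketch; adjust the path to wherever you saved your setup file):
<code bash>
# Run a saved FEAT design from the terminal
feat ~/Desktop/output/lab08/SUBJ_run01_preproc.fsf
</code>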

**2.** And now for the moment of truth. Pretty exciting, right!?
  * Click on the **''Go''** button.
    * If you have entered everything correctly, FSL will start processing according to the parameters you entered.

<WRAP center round info 90%>
FSL creates an HTML file (''report.html'') in your designated output directory. The processing progress will be written into this HTML file. Normally this ''report.html'' will automatically open in a web browser and you can watch the progress in real-time.

<wrap em>It will take some time to complete processing (10-12 minutes)</wrap>. While running, the words ''Still Running'' appear in red font on your HTML log page.
</WRAP>

===== Run the second functional run =====

While FSL is working (assuming that it is running correctly) on run01, you can start setting up your First Level Analysis for the second and third functional runs for your participant.

**1.** Set up FEAT for run02
  - In the ''Data'' tab select the input file for run02 (via ''Select 4D data'')
  - Set your ''output directory''
    - (of course this should now be ''run02'')
  - Save your setup file for run02 as you did for run01

All other parameters are still set correctly from ''run01'', so you don't have to change anything on any of the other tabs (see the sketch below for how this enables easy batching).
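
This is also why FEAT analyses are easy to batch: if the only difference between runs is the run number, you can generate the second design file from the first (a sketch, assuming ''run01'' appears in your saved ''.fsf'' file only within the input and output paths; inspect the result before running it):
<code bash>
cd ~/Desktop/output/lab08
# Swap every occurrence of run01 for run02, then run the new design
sed 's/run01/run02/g' SUBJ_run01_preproc.fsf > SUBJ_run02_preproc.fsf
feat SUBJ_run02_preproc.fsf
</code>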

**2.** Press **''Go''**
  * You can do this even if your first analysis is still running, though it might slow down your computer quite a bit.

===== Run the third functional run =====

If you are analyzing data from the ''Face task'', your subject will have a third run of data. Go ahead and repeat the steps above to run the First Level Analysis for run03. Subjects in the ''Motor task'' only have two runs of data.

<WRAP center round tip 90%>
While you wait for your two (or three) runs of data to finish processing, you should read on through the wiki to preview what steps are coming up next.
</WRAP>

===== Run analysis on data that has not been preprocessed =====
The last first-level analysis we will run tonight will be on data that does not first get preprocessed. In your open ''FEAT'' window make the following changes:
  * Data tab
    * select the run01 data for your task
    * set the output directory to ''/Users/hnl/Desktop/output/lab08/run01_nopreproc''
  * Pre-stats tab
    * Set Motion correction to ''None''
    * Set Slice timing correction to ''None''
    * Set Spatial smoothing to ''0.0''
    * For Temporal filtering, unselect ''Highpass''
  * Stats tab
    * On the EV 1 tab, unselect ''Apply temporal filtering''
    * Do the same for the EV 2 tab
  * Everything else can stay the same
  * **''Go''**
==== Check for Errors on your Report Log HTML Page ====
<WRAP center round info 90%>
As long as your HTML log file is not printing pages of error messages, you should be fine. However, if you do observe error messages in your HTML log, read them carefully. Errors will almost certainly be due to an incorrect specification of an input file (wrong name, wrong path, etc.), or because you are trying to write output to a "read only" folder. Check carefully and try to solve these errors on your own before calling me to help. Solving mundane technical problems is a bigger part of science than we care to admit!

{{ :psyc410:images:feat_report_logs.png?nolink&500 |}}
</WRAP>

===== LAB REPORT Part 4 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 4
</typo>
</WRAP></WRAP>

  - There are no questions for this part.
</WRAP>

====== Part 5: Exploring your results ======

===== FEAT Output =====
FEAT will create a directory with whatever output name you specified and the suffix ''.feat''. So if you specified ''~/Desktop/output/lab08/run01'' as your output directory, FEAT will create the directory ''~/Desktop/output/lab08/run01.feat''.

Inside this directory there will be a file named ''report.html'' that will contain a log of activity and results. This file will be displayed as a web page in your internet browser.

The directory will also contain lots of other files and subdirectories. A full list can be found [[https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FEAT/UserGuide#FEAT_Output|here]]. For tonight's exercise we will simply look at the ''report.html'' file, but next week we will learn how to investigate the output interactively with ''FSLeyes''.
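
For orientation, here are the files from that directory that you will meet on this page (a partial listing; the directory contains many more):
<code bash>
ls ~/Desktop/output/lab08/run01.feat
# report.html               - the log/results page described below
# design.png                - an image of your model design (see Lab Report Part 2)
# filtered_func_data.nii.gz - the preprocessed 4D data (used in Part 6)
# thresh_zstat1.nii.gz      - thresholded z-stats for contrast 1 (used in Part 7)
</code>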

===== Review the Results on the HTML Page =====

<WRAP center round important 60%>
For this section look at the output for any of your first-level analyses //except// for ''run01_nopreproc''. We'll look at that one later.
</WRAP>

You can begin to review your results as soon as the HTML output indicates that the first level analysis is complete (note, while it is running, the words ''Still Running'' appear in red font on your HTML log page).

There is a lot of information in the HTML file. To get you jump-started, click on the ''Post-stats'' hyperlink on your HTML output and scroll down to see the output for the four contrasts that you specified.

  * ''zstat1 - C1(Face)''
    * The first array of brain slices shows the results of your first contrast (either ''face'' or ''right hand'', depending on your experiment) compared to all non-modeled time points (i.e., the short rest periods of 12 secs between each block).
      * In other words, this shows all the voxels in which the condition predicted the brain activity at p < .05. Importantly, this does not mean the voxel was //selectively// activated by faces (or right-hand) because, for example, maybe a voxel is just responding to visual stimulation vs no visual stimulation.
  * ''zstat2 - C2(Scene)''
    * The second array of brain slices shows the results of your second contrast (either ''scene'' or ''left hand'', depending on your experiment) compared to all non-modeled time points (i.e., the short rest periods of 12 secs between each block).
      * In other words, this shows all the voxels in which the condition predicted the brain activity at p < .05. Importantly, this does not mean the voxel was //selectively// activated by scenes (or left-hand) because, for example, maybe a voxel is just responding to visual stimulation vs no visual stimulation.

<WRAP center round info 80%>
Note that you can have the same voxels activated in both of these first two contrasts. For example, you might expect that visual cortex is activated by both faces and scenes, and so those visual cortex voxels should be activated in both of the first two contrasts.
</WRAP>
  * ''zstat3 - C3(Face > Scene)'' shows voxels that are activated **more** by ''faces'' than by ''scenes''.
    * We've subtracted out all of the activation that was in common between ''faces'' and ''scenes'' (e.g., early visual cortex) and can therefore infer that these voxels are particularly sensitive to faces.
  * ''zstat4 - C4(Scene > Face)'' shows voxels that are activated **more** by ''scenes'' than by ''faces''.
    * We've subtracted out all of the activation that was in common between ''faces'' and ''scenes'' (e.g., early visual cortex) and can therefore infer that these voxels are particularly sensitive to scenes.

<WRAP center round info 80%>
Unlike the first two contrasts, no voxels should be in common between these latter two contrasts.
</WRAP>

{{ :psyc410:images:feat_report_post-stats.png?nolink&500 |}}

<WRAP center round info 90%>
It will be more informative to fully investigate your activation results after you complete the Second Level Analysis. So, look at your results here in the HTML output, visually compare the activation patterns in the first and second runs to get a sense of the consistency, and then move on.
</WRAP>

===== LAB REPORT Part 5 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 5
</typo>
</WRAP></WRAP>

  - Include a figure of your run 1 contrast 3 results taken from your html output file.
    - What do these results show? (I'm looking for a general description of the contrast more than a detailed analysis of the specific activation pattern. In other words, what question could contrast 3 answer that the other contrasts could not.)
</WRAP>

====== Part 6: What did preprocessing do to your input data? ======

Several pre-processing steps were included in the first level analysis. Were they effective? Let's do a direct comparison of our raw data and the preprocessed data using ''FSLeyes'' to see the effects of preprocessing. Some will be obvious, whereas others are more subtle.

**1.** Launch ''FSLeyes''.

**2.** Load two files:
  * your raw data set
    * e.g., ''~/Desktop/input/fmri/loc/data/nifti/SUBJ/SUBJ_face_run01.nii.gz''
  * your preprocessed data set
    * e.g., ''~/Desktop/output/lab08/run01.feat/filtered_func_data.nii.gz''

<WRAP center round info 70%>
Of course, replace ''SUBJ'' and ''face'' with the appropriate subject ID and task.
</WRAP>

**3.** Open a second viewer to view the files side by side
  * ''View'' -> ''Ortho View''
    * In the left window, deselect the ''filtered_func_data.nii.gz'' data by deselecting the blue eye.
    * In the right window, deselect the ''SUBJ_face_run01.nii.gz'' data by deselecting the blue eye next to it.
    * You should now have the raw data displayed in the left windows, and your preprocessed data displayed in the right windows:

{{ :psyc410:images:dual_ortho.png?800 | }}

<WRAP center round tip 80%>
Notice that when you click at a location on one of the brains, the cursor will jump to the same location in the other brain.
</WRAP>

**4.** Display the raw and preprocessed data sets' time-series.
  * Press ''command'' + ''3''
  * This will only turn on the time-series from one of the brains. Turn on the other one by selecting the grey eye in the ''Overlay list'' (the list in the ''Time series'' window, not in the ''Ortho View'' windows).
  * Select ''Normalised'' from the ''Plotting mode'' drop-down list.

**5.** Turn off the modeled time-series.
  * You should now see three different waveforms: the raw voxel time-series, the pre-processed time-series, and the fitted model. Let's turn off the fitted model. To do so:
    * Highlight the ''filtered_func_data'' in the ''Overlay list''
    * Select the wrench icon in the ''Time series'' window.
    * Unselect ''Plot full model fit'' from the ''FEAT settings''
      * In the image below, the ''Plot full model fit'' has not yet been unchecked.

{{  :psyc410:images:fsleyes_plotfullmodelfit.png?400  |}}

**6.** Compare the raw time-series with the preprocessed time-series. You might want to try this at a few different voxels.
  * Can you see differences in the time-series?
  * The easiest difference to identify 'by eye' is the impact of the highpass filter.
  * We'll have another look at the effect of preprocessing in the next section.

===== LAB REPORT Part 6 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 6
</typo>
</WRAP></WRAP>

  - There are no questions for this part.
</WRAP>


====== Part 7: How well did your model account for your activation results? ======

As discussed in the lecture, you should examine your residuals to see how well your model accounts for the time-course of your activations. The residual of a regression is considered error variance, and the Least Squares approach used in regression analysis seeks to minimize the error variance.

<WRAP center round tip 90%>
You want the ratio of explained variance to unexplained variance to be as large as possible. The residuals represent the unexplained variance and therefore you want them as small as possible.
</WRAP>
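
In GLM terms, FEAT fits y = Xb + e at each voxel, where y is the voxel's time-series, X is the design matrix (your EVs), b is the vector of fitted weights, and e is the residual. The residual time-series is simply e = y - Xb: whatever is left over after the best-fitting combination of your EVs has been subtracted out.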

**1.** Load the following files into ''FSLeyes'':
  * ''run01.feat/filtered_func_data.nii.gz''
  * ''run01.feat/thresh_zstat1.nii.gz''
    * Change the color from ''Greyscale'' to ''Red-Yellow''
    * You should now see "active" voxels overlaid on the brain.

**2.** Click on a strongly activated voxel (colored yellow).

**3.** View the time-series
  * Press ''command'' + ''3''
  * Highlight ''filtered_func_data''
  * Click on the wrench icon
  * In the ''FEAT'' options...
    * Select ''Plot residuals''
    * Unselect everything else

**4.** Look at the residual for that voxel and judge whether the activation waveshape is largely absent. If it is, it means that your model successfully accounted for the activation, and there is no task-related activation left in the residual error term.

**5.** Load the data that was analyzed **without** preprocessing.
  * Add ''run01_nopreproc.feat/filtered_func_data.nii.gz'' to your display.
  * In the Time series window, change the settings to only show the residuals (as you just did above)

<WRAP center round tip 80%>
It will be best to view these residual timeseries with the ''Plotting mode'' set to ''Normal'' or ''Demeaned''.
</WRAP>

**6.** You should now see the residuals from each of the two data sets. Remember, these are identical data sets that were analyzed with the exact same model. The only difference was that one was preprocessed and the other was not. Do you observe larger residuals (i.e., more unexplained variance) for the data that was not preprocessed?

===== LAB REPORT Part 7 =====
<WRAP center round important 100%>
<WRAP centeralign>
<WRAP centeralign>
<typo fs:x-large; fc:purple; fw:bold; text-shadow: 2px 2px 2px #ffffff>
LAB REPORT Part 7
</typo>
</WRAP></WRAP>

  - Include a figure showing a residuals time-series comparison from a voxel that you think clearly highlights the benefit(s) of preprocessing. That is, a voxel in which you believe the residuals after preprocessing are smaller than the residuals without preprocessing.
    - Specifically, what do you observe that motivated you to choose this voxel?
</WRAP>