Neuroimaging  ·  Computational Methods  ·  Brain Health

Vaidehi Patel

Computational Neuroscientist & Research Engineer

Pipelines, statistical frameworks, multi-scale analysis — built to answer questions that don't have off-the-shelf methods. Engineer first. Neuroscientist by choice.

150+ · Thesis downloads across 10 countries
139 · Subjects — TBI & controls (manuscript)
20K+ · Cortical points per subject
FreeSurfer throughput via parallel batching
Background
An engineer who had no choice but to take the question seriously.

Computer engineer turned neuroscientist. I discovered my passion for neuroscience while working at a medical device company during COVID-19 — following a spark of curiosity left by a traumatic brain injury, and a choice to transform one of the worst things that happened to me into one of the best things that happened for me.

That decision led me to one of the only MS programs in Cognitive Neuroscience in the United States — and to mapping what happens to the brain's neurovascular system after moderate-to-severe TBI.

My MS thesis produced one of the first vertex-based CBF-fALFF neurovascular coupling analyses in TBI — 20,484 cortical points per person, 63 subjects (29 TBI + 34 controls), three timepoints, two spatial scales: global (whole-brain and hemisphere-level) and neighbourhood vertex-wise coupling. Submitted January 2025. 150+ downloads across 10 countries before journal publication. Presented at RCMI 2025.

As Research Associate I expanded this work substantially across three fronts.

First, I extended the preprocessing pipeline — identifying that standard motion thresholds were insufficient for TBI populations and implementing validated TBI-appropriate scrubbing parameters, with stricter lesion masking.

Second, I built two categories of automation: parallel processing scripts that run multiple subjects through FreeSurfer simultaneously, and end-to-end analysis scripts that made the manuscript-scale work tractable. The thesis computed global coupling as 2 values per subject — LH and RH — manually in Excel: coupling columns, t-tests, corrected p-values, box plots, one subject at a time. The manuscript extends this to five simultaneous spatial scales across 139 subjects: 68 DKT atlas regions × 2 hemispheres, 7 Yeo networks × 2 hemispheres, neighbourhood vertex-wise, whole-brain, and hemisphere-level. At that scale, FDR correction must run across all regions simultaneously, and every figure must regenerate automatically when the data changes — neither is feasible to do correctly by hand.

Third, I documented the full pipeline so a neuroimaging novice could replicate it start to end from raw data. A first-author manuscript is in preparation.

TBI is where I built my proof of work. It is not the boundary of what I can do or think in. I am drawn to any hard problem at the intersection of how the brain works and how we build systems to understand or augment it — neuroimaging, BCI, human factors, computational methods applied to brain health. The engineering and scientific curiosity transfers and expands.

Disciplines were divided to make it easier to go deep — not because they're separate when seen from a higher dimension.

Degree: M.S. Cognitive Neuroscience, CUNY
Prior: B.E. Computer Engineering, Mumbai
Lab: Neuroimaging Lab, CCNY
Focus: TBI · NVC · Neuroimaging Pipelines
Status: Available Immediately
Location: New York, NY · Open to Relocation
Work
Research that builds its own foundation.
Multi-Scale Neurovascular Coupling in Moderate-to-Severe TBI
Journal Manuscript — In Preparation

An expanded first-author study investigating CBF-fALFF neurovascular coupling disruption across the first year post-injury in moderate-to-severe TBI. Applies five simultaneous analytical frameworks — whole-brain global, hemisphere-level, Yeo-7 functional networks, 68-region DKT atlas, and neighborhood vertex-wise — to characterize disruption patterns and their relationship to injury severity (PTA), neuropsychological outcomes, and lesion volume. Built on my MS thesis, expanded in scope and depth through ongoing Research Assistant work.

Subjects: 139 (TBI + controls)
Timepoints: 3, 6 & 12 months post-injury
Spatial scales: 5 simultaneous
Status: Manuscript in preparation
MS Thesis: Vertex-Based CBF-fALFF Coupling in TBI — First Study at This Resolution
150+ Downloads · 10 Countries

One of the first studies to examine neurovascular coupling in moderate-to-severe TBI at vertex-level spatial resolution. Computed CBF-fALFF coupling at 20,484 cortical surface points per subject across 121 imaging sessions (29 TBI × 3 timepoints + 34 controls). Found significant bilateral coupling disruption at 12 months post-injury and significant negative correlation between coupling and post-traumatic amnesia severity (PTA) at 6 months — supporting NVC disruption as a candidate non-invasive biomarker for TBI severity and recovery monitoring.

Submitted: January 2025
Downloads: 150+ across 10 countries
Key finding: p=0.017 bilateral at 12 months
APA Citation
Patel, V. H. (2025). Vertex-based analysis of cerebral blood flow and fractional amplitude of low-frequency fluctuations (CBF-fALFF) coupling in moderate-to-severe traumatic brain injury during the first year post-injury [Master's thesis, The City University of New York]. CUNY Academic Works. https://academicworks.cuny.edu/gc_etds/6138/
Non-Invasive Study of Neurovascular Coupling in Traumatic Brain Injury
Poster · RCMI 2025

Presented at RCMI 2025. Co-authored with MA Yamin and JJ Kim (CUNY School of Medicine / Graduate Center). Examined CBF-fALFF neurovascular coupling in moderate-to-severe TBI — 29 TBI patients, 34 healthy controls, three timepoints. Found persistent hemisphere-level coupling disruption by 12 months post-injury and significant negative correlation between coupling and injury severity (PTA) at 6 months. Funded by NIH NINDS and NIMHD.

Authors: Patel, Yamin, Kim
LH: r = −0.506, p = 0.005
RH: r = −0.466, p = 0.011
Funding: NIH NINDS · NIMHD
APA Citation
Patel, V., Yamin, M. A., & Kim, J. J. (2025). Non-invasive study of neurovascular coupling in traumatic brain injury [Poster presentation]. RCMI 2025, New York, NY.
RCMI 2025 Poster — Non-Invasive Study of Neurovascular Coupling in TBI
Projects
Building systems, not just running analyses.
CareerOS — Multi-Agent Job Search System
Ongoing · V6 · Python + Claude API

Most job-search tools solve the wrong problem. They automate the application — bulk-apply, auto-fill, spray and pray. The actual bottleneck is upstream: figuring out which roles are worth your time, whether you're eligible to take them, what makes you the right person for this specific company's problem, and what you should be building toward that these roles are practice for. This system addresses all four. The operational layer (find, filter, comply, score, draft) handles execution. The strategic layer answers a harder question: given who you actually are, what work would be genuinely hard to replace you in?

The distinction the system is built around
Type 1 — Execution: Running search strings. Deduplicating. Checking E-Verify language. Formatting JSON. Generating cover letter drafts. High volume, clearly defined, automatable. The agents handle this.
Type 2 — Judgment: Is this company's problem something I actually want to solve? Is this role a step forward or lateral? Which of the three validation project options would actually impress this hiring team? Does this fit score reflect a real match or a keyword coincidence? The system surfaces these calls. I make them.
Type 3 — Strategic: What combination of my background is genuinely hard to replicate? Which tasks in any role I take are automatable within two years? What does a defensible career position look like at the intersection of neuroscience, engineering, and clinical operations? The strategy agent works on this continuously.
Operational layer — what runs daily
Agent 1 — Title Dictionary
Builds 30+ role variants before searching — a CRC at MGH, an R&D associate at a neurotech startup, and a clinical operations lead at a CRO are often the same role. Seeds all downstream queries. Updates weekly from what real postings surface.

Agents 2A–2E — Scout Layer
Five parallel scouts: ATS boards directly (Greenhouse, Lever, Ashby, Workable), aggregators, pre-posting hiring signals via LinkedIn Google X-ray, Seed–Series B startups on Wellfound, and a curated roster of 40+ careers pages. Dual-filter: qualifies on title match OR ≥2 skill anchors in JD text (Python, FreeSurfer, IRB, fMRI, clinical research, longitudinal, etc.).
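The scout layer's dual-filter can be sketched in a few lines. This is a minimal illustration, not the production code — the function name, the title variants, and the anchor list here are hypothetical stand-ins for the real dictionary:

```python
# Hypothetical sketch of the dual-filter: a posting qualifies on a title
# match OR on >= 2 skill anchors appearing in the JD text.
SKILL_ANCHORS = ["python", "freesurfer", "irb", "fmri",
                 "clinical research", "longitudinal"]

def qualifies(title: str, jd_text: str, title_variants: set[str]) -> bool:
    """Return True if the posting passes either branch of the dual filter."""
    title_hit = any(v in title.lower() for v in title_variants)
    anchor_hits = sum(1 for a in SKILL_ANCHORS if a in jd_text.lower())
    return title_hit or anchor_hits >= 2

variants = {"research associate", "clinical research coordinator"}
qualifies("Data Analyst", "Python scripting for longitudinal fMRI QC", variants)  # True: 3 anchors
```

The OR makes the filter deliberately permissive — a mis-titled posting with the right skills still gets through, and the fit scorer downstream does the real ranking.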

Agents 3A–3C — Filter + Intel Layer
3A scores fit 1–10; every score cites the specific JD language it's based on — not vibes, not keyword overlap. 3B checks STEM OPT compliance conservatively: W-2 vs. contractor, E-Verify signals, sponsorship language, timeline feasibility against the deadline. 3C builds a dossier: what problem this hire solves, team structure via LinkedIn X-ray and theorg.com, a hiring-speed estimate, and three validation project options specific to their work.

Agent 4 — Deduplication
Collapses the same role appearing across five ATS platforms into one before it reaches the apply queue.
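Deduplication comes down to a normalized key. A minimal sketch — the normalization rules below are illustrative, not the production ones:

```python
import re

def dedup_key(company: str, title: str) -> tuple[str, str]:
    """Collapse superficial differences (case, punctuation, '(Remote)'
    suffixes) so one role found on several ATS platforms maps to one key."""
    def norm(s: str) -> str:
        s = s.lower()
        s = re.sub(r"\(.*?\)", " ", s)     # drop parentheticals like "(Remote)"
        s = re.sub(r"[^a-z0-9 ]", " ", s)  # strip punctuation
        return " ".join(s.split())
    return norm(company), norm(title)

seen = {}
for src, company, title in [
    ("greenhouse", "Acme Neuro", "Research Associate (Remote)"),
    ("lever",      "ACME Neuro", "Research Associate"),
]:
    seen.setdefault(dedup_key(company, title), []).append(src)

len(seen)  # one role, two sources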

Agents 5–8 — Action Layer
Resume customizer rewrites emphasis and mirrors JD language — fabrication guard in every prompt, no invented skills. Cover letter is 200 words, specific to this company's challenge, not a template. Networking agent generates three tone variants for outreach under 150 words each, each referencing something real from their work. Validation agent designs an analysis or deck using public data to demonstrate exactly what the JD requires.

Agent 9 + tracker.py
Agent 9 formats everything into structured JSON. tracker.py reads it, assigns Job IDs, appends rows to a multi-sheet Excel tracker, color-codes by STEM OPT verdict and fit score, creates clickable Apply links, flags duplicates, backs up automatically before each run.
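The core of that handoff — ID assignment and color-coding from the STEM OPT verdict and fit score — can be sketched with the standard library. The real tracker.py writes these rows into a multi-sheet Excel workbook via openpyxl; the field names, thresholds, and color rules below are illustrative assumptions, not the production values:

```python
import json

# Hypothetical sketch of tracker.py's core logic: read Agent 9's structured
# JSON, assign sequential Job IDs, and pick a highlight color.
COLORS = {"ineligible": "red", "strong": "green", "review": "yellow"}

def color_for(row: dict) -> str:
    if row["stem_opt_verdict"] != "eligible":
        return COLORS["ineligible"]          # OPT problems trump fit score
    return COLORS["strong"] if row["fit_score"] >= 8 else COLORS["review"]

def build_rows(agent9_json: str, next_id: int = 1) -> list[dict]:
    rows = []
    for i, job in enumerate(json.loads(agent9_json), start=next_id):
        rows.append({"job_id": f"JOB-{i:04d}", "color": color_for(job), **job})
    return rows

sample = json.dumps([
    {"company": "Acme Neuro", "fit_score": 9, "stem_opt_verdict": "eligible"},
    {"company": "FooBar CRO", "fit_score": 5, "stem_opt_verdict": "no E-Verify"},
])
rows = build_rows(sample)
```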

Agents 10 & 12 — Refinement Layer
Daily briefing: quota met, follow-up queue (applications 7+ days old), pattern analysis after 20+ applications, deadline escalation if <15 days remain. LinkedIn optimizer rewrites headline and about section for recruiter search, generates 10 post ideas weekly. Self-refinement: title dictionary and scoring weights update from application outcomes.
Strategic layer — what runs underneath
Career Strategy Analyst
Maps current skills to market demand, identifies which tasks in any role are automatable within two years, and surfaces where the combination of engineering + neuroscience + clinical operations creates a position that's genuinely hard to replicate. Output: defensible roles vs. roles where you're competing on credentials alone.

Sample output for this candidate's profile:
❌ Literature review: 85% automatable
❌ Standard data analysis: 90% automatable
✅ Experimental design: 30% automatable
✅ Hypothesis generation: 20% automatable
✅ Translating between cognitive science and AI engineering: not automatable
✅ Evaluating whether AI research is actually valid: not automatable
→ Defensible position: roles where ambiguity is the job, not a side effect


Critical Thinking Partner
Not supportive — challenging. Pressure-tests assumptions in the job search strategy itself. Why is this role worth applying to? What's the actual risk if the STEM OPT deadline passes? What would it mean to take the wrong role at the right company? The value of this agent is that it asks things I might avoid asking myself.

AI Research Validator (in development)
A tool that evaluates AI research claims against actual cognitive science literature. AI researchers regularly misapply neuroscience concepts — "working memory" applied to token context, "theory of mind" applied to pattern matching on ToM outputs. This agent identifies overclaims and suggests accurate reframing. It's Type 3 work: it can only exist if the person running it understands both fields well enough to know what's wrong. That's the differentiator the system is built to surface.

Why this framing matters
The operational layer gets you a job. The strategic layer gets you the right job — and prevents spending six months optimizing an application process toward roles that are themselves automatable. The system is designed to keep both running simultaneously: find the best available opportunity now, while building toward a position that's harder to compete away.
Architecture: 12 agents · 5 layers operational + 3 agents strategic
Stack: Claude API · Python · openpyxl · Google X-ray (no login, no scrapers)
Versions: V4 → V5 → V6 · each documented what failed and why
Status: V6 operational for manual execution · full automation layer in progress
V6 Architecture — Operational Layer
Agent 1 — Title Dictionary: 30+ variants · tiered · seeds all scout queries
Agent 2A — ATS Boards: Greenhouse · Lever · Ashby · Workable
Agent 2B — Aggregators: broad sweep
Agent 2C — Hiring Signals: LinkedIn X-ray · pre-posting
Agent 2D — Startup Scout: Wellfound · Seed–Series B
Agent 2E — Target List: 40+ careers pages
Dual-filter: title match OR ≥2 skill anchors
Agent 4 — Dedup: collapse across sources
Agent 3B — STEM OPT: W-2 · E-Verify · timeline
Agent 3A — Fit Scorer: 1–10 · cites JD language
Agent 3C — Company Intel: why hired · team · 3 validation options
Tier A & B advance
Agent 5 — Resume: reframe · no fabrication
Agent 6 — Cover Letter: 200 words · company-specific
Agent 7 — Networking: 3 variants · cites their work
Agent 8 — Validation: analysis/deck · public data
Agent 9 + tracker.py: JSON → Excel · Job IDs · color-coded
Agent 10 — Daily Brief: quota · follow-ups · deadline
Agent 12 — LinkedIn: headline · posts weekly
Self-refinement: title dict + weights from outcomes
Strategic layer — runs alongside
Career Strategy Analyst: automation risk per task · defensible skill combinations · trajectory mapping
Critical Thinking Partner: challenges assumptions · finds blind spots · pressure-tests decisions
AI Research Validator: evaluates AI claims against cognitive science · flags misapplied concepts · in development

Manual execution — I run each agent, read the output, decide what moves forward · every fit score cites specific JD language · fabrication guard in every resume prompt · judgment stays human throughout

ASL Preprocessing Pipeline & NVC Analysis Scripts
Lab Infrastructure · Production Use

Two tools built for the same research programme.

The pipeline extends the standard ASLtbx workflow with a TBI-appropriate motion-scrubbing stage. ASL is a low-SNR sequence, and standard motion thresholds designed for healthy adults are insufficient for TBI populations, who move more during scanning. Identifying this gap, adapting the thresholds to the population, and inserting a validated scrubbing stage between PART1 and PART2 was a methodological contribution, not a toolbox fix.

The analysis scripts make the full scale tractable. The thesis computed global coupling as 2 values per subject in Excel: coupling columns, manual t-tests, corrected p-values, box plots. The manuscript extends this to five simultaneous spatial scales across 139 subjects: 68 DKT regions × 2 hemispheres, 7 Yeo networks × 2 hemispheres, vertex-wise, whole-brain, and hemisphere-level. At that scale, FDR correction has to run across all regions simultaneously — Excel can't do that correctly — and every figure would need to be remade by hand each time anything changed. A single script call now produces what would have required rebuilding that spreadsheet hundreds of times over.

Pipeline: ASLtbx · SPM12 · MATLAB · Python
Analysis: R · Python · FreeSurfer
Sessions QC'd: 130+
Documentation: Full pipeline doc ↗
What ASL measures · why it's hard · what TBI adds
CBF via ASL
Arterial spin labelling tags water in blood magnetically, lets it flow into brain tissue, subtracts labelled from control to measure cerebral blood flow. No contrast agent. But SNR is low — the signal is ~1% of background. Every source of noise matters.
Why motion is worse here
fMRI scrubs individual corrupted frames. ASL can't — CBF = control − label as a pair. Remove one frame, you corrupt the subtraction. Remove neither, the outlier CBF value contaminates everything downstream, including the NVC coupling calculation it feeds into.
TBI compounds this
Moderate-to-severe TBI patients move more during scanning. Standard motion thresholds designed for healthy adults would exclude too much data. The pipeline needed parameters that account for the population — not just copy defaults from another modality.
ASL Preprocessing — What Each Stage Does and Why It's There
1
DICOM → NIfTI · Session Mapping
Raw scanner files converted to neuroimaging format · 90 volumes acquired (45 label + 45 control, interleaved) · subject IDs mapped across naming conventions (scanner ID ≠ study ID) · directory structure built exactly as ASLtbx expects — wrong structure = silent failure at CBF step
2
Motion Realignment — must happen before scrubbing
SPM two-pass: align all 90 volumes to a stable reference mean · generates motion parameters (translation + rotation per volume) · scrubbing happens after this, not before — you can only identify which frames moved too much once you know how much they moved, and that measurement is only accurate after alignment
3
TBI Motion Scrubbing Inserted — not in standard ASLtbx
Standard ASLtbx moves directly PART1 → PART2 using motion thresholds designed for healthy adults. This stage was inserted between them with population-appropriate thresholds derived from TBI-specific literature. Volumes exceeding these thresholds are removed as label-control pairs — never individually, because CBF = control − label; removing one frame corrupts the subtraction. The mean image is recomputed from clean volumes only before coregistration, because a motion-blurred mean shifts the anatomical reference and causes CBF to be systematically underestimated. Validated: sub86 CBF improved from 38.5 → 52.5 mL/100g/min after applying population-appropriate thresholds.
4
Coregistration · Smoothing · Brain Masking
T1 structural scan aligned to clean ASL mean (functional → structural alignment) · Gaussian smoothing improves ASL SNR before subtraction · brain mask excludes non-brain signal that would otherwise contaminate whole-brain CBF estimates
5
CBF Quantification · MNI Normalisation
Label-control subtraction using protocol-specific acquisition parameters · T1 tissue segmentation (grey matter / white matter / CSF) for partial volume correction · normalised to MNI standard brain space using lab-specific bounding box — the toolbox default silently clips superior and inferior regions of the brain
Output per subject: wmeanCBF (MNI-space) · cmeanCBF (outlier-cleaned) · globalsg.txt (whole-brain CBF) — feeds directly into NVC analysis
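The pair-wise rule in stage 3 can be sketched in a few lines. This is a simplified illustration, not the pipeline's MATLAB implementation — the framewise-displacement values and the 0.5 mm threshold below are toy numbers, whereas the actual thresholds are derived from TBI-specific literature:

```python
# Minimal sketch of pair-wise scrubbing: motion is judged per frame, but
# removal always happens as a label-control pair, and the reference mean is
# recomputed from clean volumes only.
def scrub_pairs(volumes, fd, threshold=0.5):
    """volumes: interleaved [label0, control0, label1, control1, ...]
    fd: framewise displacement per volume, same length as volumes."""
    clean = []
    for i in range(0, len(volumes), 2):              # step through pairs
        if fd[i] <= threshold and fd[i + 1] <= threshold:
            clean.extend(volumes[i:i + 2])           # keep the pair intact
        # else: drop BOTH frames — a lone frame corrupts control − label
    return clean

def clean_mean(values):
    """Reference mean from surviving volumes only (stage 3's recomputed mean)."""
    return sum(values) / len(values)

vols = [10.0, 11.0, 50.0, 12.0, 10.5, 11.5]   # second pair is motion-corrupted
fd   = [0.1, 0.2, 0.9, 0.1, 0.2, 0.3]
kept = scrub_pairs(vols, fd)                   # drops indices 2 and 3 together
```

Note that frame 3 alone is under threshold; it is removed anyway because its label partner (frame 2) moved — exactly the pairing constraint the stage enforces.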
What the analysis is actually measuring · why five scales · what the clinical variables mean
Neurovascular coupling
The brain's ability to match blood flow to neural activity — measured here as the correlation between CBF (blood flow via ASL) and fALFF (neural activity via resting-state fMRI) at each cortical location. In TBI, this coupling is disrupted. The question is: where, how much, and does it track with how badly the person was injured and how well they recover?
Why five spatial scales
A whole-brain average can mask regional disruption. Network-level (Yeo-7) shows which functional systems are affected. Region-level (DKT, 68 ROIs) pinpoints anatomy. Vertex-wise (20,484 points per person) maps the cortex without parcellation assumptions — if disruption follows watershed zones or lesion boundaries rather than functional atlases, only vertex-level analysis can show it.
PTA and lesion masking
PTA = post-traumatic amnesia, the clinical measure of injury severity used here. Lesion masking removes vertices within TBI lesion boundaries before computing coupling — without this, damaged tissue introduces noise that could mask real coupling disruption or manufacture false signal. These aren't cosmetic details; they're what makes the analysis interpretable.
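The coupling metric itself reduces to a correlation. A minimal stdlib sketch, with toy numbers — the real analysis runs this across vertices, regions, and networks per subject:

```python
import math

# Coupling at a given scale = Pearson correlation between CBF and fALFF
# across that scale's units (e.g. vertices in a neighbourhood, or regions
# in a parcel). Values below are illustrative, not study data.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cbf   = [52.0, 48.0, 61.0, 55.0, 44.0]   # mL/100g/min at five locations
falff = [0.8, 0.7, 1.1, 0.9, 0.6]        # fALFF at the same locations
coupling = pearson_r(cbf, falff)          # near +1: flow tracks activity
```

When blood flow stops tracking neural activity — the disruption TBI introduces — this correlation drops, which is what the group comparisons and PTA correlations quantify.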
NVC Analysis Scripts — Five Scales, One Run
Scale 1
Whole-Brain
Single CBF-fALFF correlation per subject · group comparisons (TBI vs HC) · longitudinal change across 3, 6, 12 months · correlations with PTA severity
Scale 2
Hemisphere
Left and right cortex separately · bilateral vs asymmetric disruption · same group + longitudinal tests as global · detects lateralised injury effects that a whole-brain average hides
Scale 3
Yeo-7 Networks
7 functional brain networks (default mode, frontoparietal, visual, etc.) · computed LH / RH / combined · FDR correction across comparisons · identifies which network type is most disrupted
Scale 4
DKT Atlas
68 anatomical regions (cortical parcellation) · LH / RH / combined · lesion masking applied per region · identifies specific anatomical loci — e.g. is the precuneus more affected than frontal cortex?
Scale 5
Vertex-wise
20,484 cortical points per person · coupling computed at each vertex · permutation testing + cluster correction · no parcellation assumptions — the highest resolution picture of where NVC breaks down
139 subjects · 3 timepoints · 5 scales · all in one run
NVC correlation with PTA, neuropsych scores at each scale
All figures reproducible · scripts available on request
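The FDR correction these scripts run across all regions at once follows the standard Benjamini–Hochberg step-up procedure. A minimal stdlib sketch of that procedure — an assumption about the method, not the lab's actual code:

```python
# Benjamini–Hochberg FDR: sort p-values, find the largest rank k with
# p_(k) <= (k/m) * alpha, and reject everything at or below that rank.
def fdr_bh(pvals, alpha=0.05):
    """Return a parallel list of booleans: True where the null is rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:   # BH step-up criterion
            max_k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            reject[i] = True
    return reject

# e.g. 68 DKT regions would supply 68 p-values in a single call:
fdr_bh([0.001, 0.04, 0.03, 0.9])
```

Running the correction over all regions in one call is the point: correcting each region's test in isolation (as a spreadsheet workflow tends to do) does not control the false discovery rate across the family of comparisons.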
Capabilities
The full stack of what I bring.
Neuroimaging
FreeSurfer · fMRIPrep
ASLtbx · CONN Toolbox · SPM12
Surface-based analysis
Cortical parcellation
Programming
Python (NumPy, SciPy, Pandas, Matplotlib, Nibabel)
R (modeling, doParallel)
MATLAB · Bash
Statistical Methods
Longitudinal analysis
Locally weighted regression
Permutation testing
FDR & cluster-based correction
Mixed-effects models
AI & Automation
Advanced prompt engineering
Multi-agent LLM architecture
Claude / GPT API integration
Pipeline automation design
Research Methods
CITI-certified (HSR + RCR)
IRB compliance
Neuropsychological assessment
Experimental design
QC across 130+ sessions
Domain Knowledge
TBI neurovascular pathophysiology
Medical device operations
Human factors & operator cognition
Translational research thinking
Let's Talk
Available immediately.
STEM OPT eligible. Open to relocation.

Looking for roles in neuroimaging research, neurotechnology, healthcare AI, or academic research labs focused on brain health and recovery.
Research Scientist · Research Associate · Computational Neuroscientist · Data Scientist · Human Factors Scientist