Service Types

Research Design and Analysis (RDA) Services support graduate students and faculty researchers at OISE and beyond through two offerings: individual consultations and research software training.

Individual Consultations

You can book one-on-one consultations with our expert in research methodologies, Dr. Olesya Falenchuk. These consultations provide support at any stage of your research project and on any aspect—from formulating research questions and designing your study to data management, analysis, and reporting. To book a consultation, please contact Dr. Olesya Falenchuk at olesya.falenchuk@utoronto.ca.

Workshops

RDA Services offer a range of workshops designed to train researchers in using software for managing and analyzing both quantitative and qualitative data. All workshops are delivered asynchronously online, giving researchers the flexibility to engage with the material at their own pace and support their ongoing skill development.

Each workshop registration includes a complimentary follow-up one-on-one consultation with Dr. Olesya Falenchuk, our research methodology consultant.


Note: Customized workshops can be arranged for research groups of three or more participants and can be delivered either synchronously online or in person. To schedule a customized workshop, please contact Dr. Falenchuk at olesya.falenchuk@utoronto.ca.

Registration

We offer training in the following research software packages:

  • NVivo 14 (Windows and Mac) and Dedoose – for qualitative and mixed-methods data
  • SPSS, Stata, and R – for quantitative data

To register for an RDA workshop, use the provided registration link, which will direct you to our registration system. After registering and receiving confirmation, please email Dr. Olesya Falenchuk at olesya.falenchuk@utoronto.ca to receive instructions and access to the workshop materials.

Workshop Fees

Registrant                          Fee
U of T Students                     $80.00
Non-U of T Students                 $100.00
U of T Faculty and Staff            $100.00
External Faculty/Staff/Affiliates   $200.00

Part I – Introduction to NVivo and essential tools

NVivo is a software package used for managing and analyzing qualitative data, including transcripts of in-depth interviews, focus groups, field notes, data from online surveys, as well as audio, video, images, and social media content. The purpose of this workshop is to introduce participants to the NVivo platform, its tools, and its capabilities, and to help them develop practical skills for managing and analyzing qualitative data using the software.

Participants will learn how to import and organize their data in NVivo, take research notes during analysis, and apply various coding techniques to their qualitative data. This workshop is suitable for beginner researchers with no prior experience using NVivo, as well as those looking to expand their knowledge and skills with the software.

  • Launch NVivo and create a project
  • Navigate around a project
  • Save and make a project backup
  • Import text data
  • Edit text data
  • Create new files
  • Organize data files into folders
  • Use note-taking tools (annotations, memos, see-also links)        
  • Manually code unstructured text data
  • Restructure code tree
  • Autocode structured data
  • Validate data coding
  • Rename, remove, sort, and merge codes
  • Export code structure and coded content

Part II – Advanced NVivo tools

The purpose of this workshop is to introduce participants to more advanced features of NVivo and to provide hands-on experience with the software. Specifically, participants will learn how to use cases and attributes, organize project items into sets, and perform various types of queries to explore their data and identify patterns and trends in coding. This workshop is suitable for researchers with some basic familiarity with NVivo, as well as those looking to deepen their understanding of its advanced capabilities.

Completion of the NVivo Part I workshop is strongly recommended for those planning to attend this session.

  • Create new cases
  • Code into existing cases
  • Create case classifications and case attributes
  • Import and export classification sheets
  • Create sets
  • Conduct various types of queries:
    • Word Frequency Queries
    • Text Search Queries
    • Coding Queries
    • Matrix Coding Queries
    • Compound Queries
    • Group Queries
  • Save queries and results

Part III – NVivo tools for working with data other than text

This workshop focuses on the management and analysis of images, audio, video, bibliographic data, survey responses, and social media content in NVivo. Participants will learn how to efficiently analyze large volumes of data (such as surveys and social media), transcribe and code media files, and work with bibliographic sources within the NVivo environment. The workshop will also introduce NVivo’s data visualization tools and demonstrate how they can be used to support data interpretation and reporting. This session is suitable for researchers with limited experience using NVivo, as well as those looking to expand their skills and explore its more advanced features.

Completion of the NVivo Part I workshop is strongly recommended for those planning to attend.

  • Import various types of data into NVivo (images, audio, video, surveys, social media, bibliography)
  • Transcribe audio and video files
  • Code media transcripts and media files
  • Download social media content from the Web with NCapture
  • Use queries for efficient analysis of survey data
  • Create and modify visualization tools in NVivo
    • Word clouds
    • Charts
    • Hierarchy charts
    • Mind, concept, and project maps
    • Diagrams

Dedoose

Dedoose is a cloud-based platform for analyzing qualitative and mixed-methods data, including transcripts of in-depth interviews, focus groups, field notes, survey responses, and audio and video content. Its web-based design enables real-time collaboration among researchers using different operating systems (PC and Mac). The purpose of this workshop is to introduce participants to the Dedoose platform and its key features, highlight strategies for efficient use and teamwork, and provide hands-on experience in organizing and analyzing qualitative data.

Participants will learn how to import various types of data into Dedoose, create or import descriptor sets and fields, build and modify code trees, revise and validate excerpts, create and use memos, and explore patterns and trends in their findings using a range of analytic tools. This workshop is suitable for beginner researchers with no prior experience using Dedoose, as well as those looking to deepen their knowledge and enhance their skills with the platform.

  • Create a project in Dedoose
  • Import various types of data (text, PDFs, images, audio, video, surveys)
  • Create or import descriptor sets and descriptor fields
  • Assign descriptor fields to data files
  • Create and modify code tree
  • Manually code data
  • Create and organize memos
  • Explore project data and findings with analytic tools (quantitative, qualitative, and mixed-methods)

Part I – Introduction and data preparation 

This workshop has a two-fold focus: (1) to introduce researchers to the software interface, its key features, and best practices for effective use; and (2) to guide them through the process of preparing quantitative data for analysis while building practical skills for essential data management tasks.

Participants will learn how to use a data dictionary, perform basic data quality checks, merge and aggregate data files, carry out data transformations, and compute new variables.

This workshop is ideal for novice users of the software, as well as researchers seeking to enhance their efficiency and effectiveness in using its features.

  • Navigate the software interface
  • Learn and adopt best practices for documenting data analytic work
  • Import data files
  • Check data for:
    • Duplicate cases
    • Unusual cases
  • Merge data files by adding cases
  • Merge data files by adding variables
  • Aggregate data files
  • Recode continuous and categorical variables
  • Compute new variables
  • Filter or split data for analyses
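The workshop itself teaches these operations in statistical software; purely as a language-neutral sketch, the Python snippet below illustrates three of the steps (checking for duplicate cases, merging files by adding cases, and computing a new variable) on invented records with hypothetical field names.

```python
# Sketch of three data-management steps from this workshop: checking for
# duplicate cases, merging files by adding cases, and computing a new
# variable. Records and field names ("id", "score1", "score2") are
# invented for illustration.
from collections import Counter

file_a = [{"id": 1, "score1": 10, "score2": 12},
          {"id": 2, "score1": 14, "score2": 9},
          {"id": 2, "score1": 14, "score2": 9}]   # a duplicate case
file_b = [{"id": 3, "score1": 11, "score2": 15}]

# Check for duplicate cases by ID
id_counts = Counter(row["id"] for row in file_a)
duplicates = [i for i, n in id_counts.items() if n > 1]

# Drop duplicates, then merge files by adding cases (stacking rows)
seen, deduped = set(), []
for row in file_a:
    if row["id"] not in seen:
        seen.add(row["id"])
        deduped.append(row)
merged = deduped + file_b

# Compute a new variable from existing ones
for row in merged:
    row["total"] = row["score1"] + row["score2"]

print(duplicates)                        # which IDs were duplicated
print([row["total"] for row in merged])  # the computed variable
```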

Part II – Data exploration tools

This workshop will guide participants through the fundamentals of data exploration as a critical first step in data analysis. It will help them build skills in running and interpreting univariate and bivariate descriptive statistics, as well as using data visualization tools. The selection of appropriate exploratory data analysis techniques for different types of data will also be discussed.

This workshop is designed for novice users of the software, as well as researchers looking to improve their efficiency and effectiveness in using its features. Completion of the Part I workshop is strongly recommended before attending Part II.

  • Choose univariate exploratory data analysis tools depending on the measurement scales of the variables
  • Run and interpret
    • Frequency tables
    • Bar graphs
    • Histograms
    • Boxplots
    • Measures of central tendency and variability
  • Choose bivariate exploratory data analysis tools depending on the measurement scales of the variables
    • Crosstabs
    • Clustered bar graphs
    • Scatterplots
    • Various types of correlation coefficients (Phi and Cramer’s V, Tau B, Point biserial, Spearman’s rho, Pearson’s r)

Part III – Statistical methods for group comparisons 

This workshop focuses on inferential statistical techniques for comparing group means, presented within the framework of General Linear Modeling (GLM). It will cover data requirements and assumptions for each method, and demonstrate a variety of models for analyzing both independent groups and repeated-measures samples. Specifically, the workshop includes instructions for performing independent and paired-samples t-tests, as well as one-way, factorial, repeated-measures, and mixed ANOVA, and ANCOVA. Emphasis will be placed on the practical application of GLMs using statistical software. Participants are expected to have a working knowledge of the software and a solid foundation in intermediate statistics.

  • Choose a statistical technique from the GLM family to address a research question of interest
  • Understand and check model assumptions
  • Run and interpret the results of the following statistical analyses for comparing the means of independent groups
    • Independent-samples t-test
    • One-way ANOVA
    • Factorial ANOVA
    • ANCOVA
  • Run and interpret the results of the following statistical analyses for comparing the means of repeated measures
    • Paired-samples t-test
    • Repeated-measures ANOVA
    • Mixed ANOVA 
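To show the arithmetic behind the simplest of these comparisons, here is a hand computation of the pooled-variance independent-samples t statistic, written in Python on invented scores; the workshop performs the same test in statistical software.

```python
# Pooled-variance independent-samples t statistic, computed by hand on
# made-up data, to show what the software reports under the hood.
import math

group1 = [5.0, 7.0, 6.0, 9.0, 8.0]
group2 = [4.0, 6.0, 5.0, 5.0, 5.0]

def independent_t(a, b):
    """Classic pooled-variance t-test for two independent groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)          # pooled variance
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))    # standard error
    df = na + nb - 2                                  # degrees of freedom
    return (ma - mb) / se, df

t, df = independent_t(group1, group2)
print(round(t, 3), df)
```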

Part IV – Linear regression analysis 

This workshop will explore key aspects of multiple regression analysis, including approaches to model building, the use of dummy variables for categorical predictors, examination of interaction effects, assessment of model assumptions, regression diagnostics and residual analysis, detection and management of collinearity, and interpretation of results. The primary focus is practical, emphasizing the application of multiple regression techniques using statistical software. Participants are expected to have a working knowledge of the software and a solid foundation in intermediate statistics.

  • Understand data requirements and assumptions of the linear regression method
  • Prepare data for multiple regression:
    • Center predictor variables
    • Dummy code categorical predictor variables
  • Apply variable transformations for skewed dependent variables
  • Explore bivariate relationships between variables for diagnostic purposes
  • Select methods of entering multiple predictor variables that match research questions/research purpose
  • Perform linear regression analyses and interpret regression outputs
  • Perform regression diagnostics and residual analysis 
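Two of the preparation steps above, dummy coding a categorical predictor and estimating least-squares coefficients, can be illustrated with a minimal Python sketch on invented data (a real analysis would use the statistical software covered in training):

```python
# Dummy coding a categorical predictor and fitting a one-predictor
# least-squares regression by hand. Data and variable names are
# invented for the example.
groups = ["control", "treatment", "control", "treatment", "treatment"]
outcome = [3.0, 6.0, 4.0, 7.0, 8.0]

# Dummy code: control -> 0 (reference category), treatment -> 1
x = [1.0 if g == "treatment" else 0.0 for g in groups]

# Closed-form OLS for a single predictor: b1 = cov(x, y) / var(x)
n = len(x)
mx, my = sum(x) / n, sum(outcome) / n
b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, outcome))
      / sum((xi - mx) ** 2 for xi in x))
b0 = my - b1 * mx

print(round(b0, 3), round(b1, 3))
```

With a single 0/1 dummy predictor, the intercept equals the reference-group mean and the slope equals the difference between group means, which is why dummy coding lets regression reproduce a group comparison.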

Part V – Regression analysis for categorical data

This workshop covers statistical techniques for conducting regression analyses with binary, ordinal, and nominal (multicategorical) outcome variables. Topics include assumption checking and diagnostics, interaction effects, predictor selection methods, estimation of model coefficients, collinearity, and residual analysis. The primary focus is practical, with an emphasis on applying these techniques using statistical software. Participants are expected to have a working knowledge of the software and a solid foundation in intermediate statistics.

  • For each of the following methods: binary logistic regression, ordinal regression, and multinomial regression
    • Understand data requirements and assumptions
    • Select methods of entering multiple predictor variables that match research questions/research purpose
    • Perform analyses and interpret statistical outputs
    • Clearly communicate the results of analyses
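As a minimal sketch of the model behind the binary case, the Python snippet below fits a one-predictor logistic regression by plain gradient descent on a tiny invented dataset; the workshop itself uses dedicated statistical software, which also provides standard errors and diagnostics that this sketch omits.

```python
# Minimal binary logistic regression fitted by gradient descent on toy
# data, to illustrate the model behind the binary method listed above.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [0, 0, 0, 1, 1, 1]          # binary outcome

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(5000):           # gradient descent on the log-likelihood
    g0 = sum(sigmoid(b0 + b1 * xi) - yi for xi, yi in zip(x, y))
    g1 = sum((sigmoid(b0 + b1 * xi) - yi) * xi for xi, yi in zip(x, y))
    b0 -= lr * g0 / len(x)
    b1 -= lr * g1 / len(x)

# The fitted model gives the probability of y = 1, rising with x
p_low, p_high = sigmoid(b0 + b1 * 1.0), sigmoid(b0 + b1 * 6.0)
print(round(p_low, 3), round(p_high, 3))
```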

Part VI – Data reduction and classification techniques

This workshop introduces participants to the fundamental concepts of principal component analysis (PCA), exploratory factor analysis (EFA), and cluster analysis. It will cover the conceptual and analytical similarities and differences between PCA and EFA, along with their practical applications. Various types of cluster analysis will be demonstrated, with guidance on selecting appropriate methods based on the properties of the data. The workshop emphasizes hands-on experience with data reduction and classification techniques, as well as interpreting statistical output. Participants are expected to have a working knowledge of statistical software and a solid foundation in intermediate statistics.

  • Understand data requirements and assumptions of PCA and EFA
  • Perform and interpret the results of PCA
  • Perform and interpret the results of EFA
  • Select cluster analysis method based on data properties
  • Perform and interpret the results of
    • Hierarchical cluster analysis
    • K-means cluster analysis
    • Two-step cluster analysis
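The assign-and-update idea behind k-means, one of the methods listed above, can be sketched in a few lines of Python on toy one-dimensional data (real analyses would use the statistical packages covered in the workshop):

```python
# A bare-bones k-means clustering loop (k = 2) on one-dimensional toy
# data, illustrating the iterative assign/update steps of the method.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [data[0], data[3]]            # simple starting guesses

for _ in range(10):                       # usually converges quickly
    # Assignment step: each point joins its nearest centroid
    clusters = [[], []]
    for point in data:
        nearest = min(range(2), key=lambda k: abs(point - centroids[k]))
        clusters[nearest].append(point)
    # Update step: each centroid moves to its cluster's mean
    centroids = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 2) for c in centroids))
```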