UBC’s Vancouver campus Master of Data Science program covers all stages of the value chain, with an emphasis on the skills required to extract meaning from data. Over 10 months, you will learn how to extract data for use in experiments, how to apply state-of-the-art techniques in data analysis, and how to present your findings effectively to domain experts.
Highlights Across All MDS Programs:
- 10-month, full-time, accelerated program offers a short-term commitment for long-term gain
- Condensed one-credit courses allow for in-depth focus on a limited set of topics at one time
- Capstone project gives students an opportunity to apply their skills
- Real-world data sets are integrated into all courses to provide practical experience across a range of domains
Highlights Specific To Vancouver Campus Option:
- Curriculum designed jointly by computer science and statistics experts, with input from local industry
- Courses taught by faculty from both the computer science and statistics departments, giving students a broader skill set
- A cosmopolitan city, sprawling campus, and a cohort of up to 100 students offer an engaging, culturally enriched university experience
- Strong connections with industry partners in public and private sectors, start-ups, and leading tech companies offer a wide range of networking/career opportunities
- 100% placement rate
The program structure includes 24 one-credit courses offered in four-week segments. Courses are lab-oriented and delivered in-person with some blended online content.
After the six segments, students complete an eight- to ten-week capstone project, applying their newly acquired knowledge to real-life data sets while working alongside other students.
Fall: September - December
Block 1 (4 weeks)
How to install, maintain, and use the data science software “stack”. The Unix operating system, integrated development environments, and problem-solving strategies.
How to present and interpret data science findings. Drawing on the scholarship of language and cognition, this course is about how effective data scientists write, speak, and think.
Fundamental concepts in probability. Statistical view of data coming from a probability distribution.
Block 2 (4 weeks)
Converting data from the form in which it is collected to the form needed for analysis. How to clean, filter, arrange, aggregate, and transform diverse data types, e.g. strings, numbers, and date-times.
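A wrangling task of the kind this course covers might look like the following sketch, using only the Python standard library; the records, field names, and formats are invented for illustration:

```python
from datetime import datetime

# Hypothetical raw records as they might arrive from collection:
# padded strings, inconsistent casing, numbers stored as text.
raw = [
    {"name": "  Alice ", "joined": "2024-01-15", "score": "87"},
    {"name": "bob",      "joined": "2024-03-02", "score": "91"},
    {"name": "Carol  ",  "joined": "2024-02-20", "score": "78"},
]

# Clean strings, parse date-times, and convert types.
clean = [
    {
        "name": r["name"].strip().title(),                     # tidy the string
        "joined": datetime.strptime(r["joined"], "%Y-%m-%d"),  # parse the date
        "score": int(r["score"]),                              # numeric conversion
    }
    for r in raw
]

# Aggregate and arrange the cleaned data.
mean_score = sum(r["score"] for r in clean) / len(clean)
names = sorted(r["name"] for r in clean)
```

In practice this kind of work is usually done with libraries such as pandas in Python or the tidyverse in R, but the clean-convert-aggregate pattern is the same.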
Exploratory data analysis. Design of effective static visualizations. Plotting tools in R and Python.
How to choose and use appropriate algorithms and data structures to help solve data science problems. Key concepts such as recursion and algorithmic complexity (e.g., efficiency, scalability).
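Recursion and algorithmic complexity can be illustrated with the classic Fibonacci example, a minimal sketch of why the same recursive definition can be exponentially slow or linear depending on whether subproblems are cached:

```python
from functools import lru_cache

def fib_naive(n):
    # O(2^n): the same subproblems are recomputed exponentially often.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # O(n): each subproblem is solved once and cached (memoization).
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_naive(60)` would take years; `fib_memo(60)` returns instantly, a concrete view of efficiency and scalability.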
The statistical and probabilistic foundations of inference, developed jointly through mathematical derivations and simulation techniques. Important distributions and large sample results. Methods for dealing with the multiple testing problem. The frequentist paradigm.
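The multiple testing problem mentioned above can be sketched with the simplest correction, Bonferroni: with m tests at level alpha, compare each p-value to alpha/m instead of alpha. The p-values below are made up for illustration:

```python
alpha = 0.05
p_values = [0.001, 0.012, 0.049, 0.20, 0.74]  # hypothetical results of 5 tests
m = len(p_values)

# Uncorrected: three tests look "significant" at the 5% level.
rejected_uncorrected = [p for p in p_values if p < alpha]

# Bonferroni: compare each p-value to alpha/m = 0.01,
# controlling the family-wise error rate across all m tests.
rejected_bonferroni = [p for p in p_values if p < alpha / m]
```

The course covers this and more powerful alternatives developed both analytically and via simulation.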
Block 3 (4 weeks)
Linear models for a quantitative response variable, with multiple categorical and/or quantitative predictors. Matrix formulation of linear regression. Model assessment and prediction.
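The matrix formulation mentioned above reduces, for a single predictor, to a 2x2 system of normal equations, (XᵀX)β = Xᵀy, that can be solved by hand. A minimal sketch with toy data chosen so the fit is exact:

```python
# Toy data lying exactly on y = 1 + 2x, so the fitted line recovers it.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
n = len(x)

# Entries of X^T X and X^T y for the design matrix X = [1 | x].
sx, sxx = sum(x), sum(xi * xi for xi in x)
sy, sxy = sum(y), sum(xi * yi for xi, yi in zip(x, y))

# Solve the 2x2 normal equations with Cramer's rule.
det = n * sxx - sx * sx
intercept = (sy * sxx - sx * sxy) / det
slope = (n * sxy - sx * sy) / det
```

With many predictors the same idea is expressed with matrix libraries (e.g., `lm` in R or NumPy's least-squares routines) rather than by hand.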
Introduction to supervised machine learning, with a focus on classification. k-NN, decision trees, SVMs, and how to combine models via ensembling: bagging, boosting, and random forests. Basic machine learning concepts such as generalization error and overfitting.
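Of the classifiers listed, k-NN is simple enough to sketch in a few lines: classify a point by majority vote among its k nearest labelled neighbours. The training points and labels below are toy values:

```python
from collections import Counter

def knn_classify(point, data, k=3):
    """Classify `point` by majority vote among its k nearest neighbours.
    `data` is a list of ((x, y), label) pairs; squared Euclidean distance
    is enough for ranking neighbours."""
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    neighbours = sorted(data, key=lambda d: dist(point, d[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

The choice of k trades off flexibility against overfitting, exactly the generalization issues the course introduces.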
How to work with data stored in relational database systems. Storage structures and schemas, data relationships, and ways to query and aggregate such data.
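Querying and aggregating relational data can be demonstrated with Python's built-in `sqlite3` module; the schema and rows below are a throwaway illustration, not part of the course materials:

```python
import sqlite3

# An in-memory database with a toy orders table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("alice", 10.0), ("bob", 20.0), ("alice", 5.0)])

# Aggregate: total spend per customer, largest first.
rows = con.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY SUM(amount) DESC"
).fetchall()
con.close()
```

The same SQL runs against production systems such as PostgreSQL; only the connection changes.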
How to make principled and effective choices with respect to marks, spatial arrangement, and colour. Analysis, design, and implementation of interactive figures. How to provide multiple views, deal with complexity, and make difficult decisions about data reduction.
Winter: January - April
Block 4 (4 weeks)
Useful extensions to basic regression, e.g., generalized linear models, mixed effects, smoothing, robust regression, and techniques for dealing with missing data.
How to evaluate and select features and models. Cross-validation, ROC curves, feature engineering, and regularization.
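The mechanics of k-fold cross-validation can be sketched without any library: partition n observations into k folds and, for each fold, train on the rest and test on it. A minimal, unshuffled version:

```python
def kfold_splits(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n observations. No shuffling, for simplicity; real use would
    shuffle indices first."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

splits = list(kfold_splits(10, 5))
```

Each observation appears in exactly one test fold, so the averaged test error estimates generalization performance.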
Interactive vs. scripted/unattended analyses and how to move fluidly between them. Reproducibility through automation and dynamic, literate documents. The use of version control and file organization to enhance machine- and human-readability.
Introduction to optimization. Gradient descent and stochastic gradient descent. Roundoff error and finite differences. Neural networks and deep learning.
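Gradient descent itself fits in a few lines: repeatedly step opposite the gradient. A sketch on the one-dimensional function f(x) = (x − 3)², whose minimizer is x = 3; the learning rate and iteration count are illustrative choices:

```python
def grad(x):
    # Derivative of f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

x = 0.0     # starting point
lr = 0.1    # learning rate (step size)
for _ in range(200):
    x -= lr * grad(x)   # step downhill
```

Stochastic gradient descent replaces `grad` with a noisy estimate computed from a random subsample of the data, which is what makes neural network training feasible at scale.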
Block 5 (4 weeks + 1 week break)
How to find groups and other structure in unlabeled, possibly high dimensional data. Dimension reduction for visualization and data analysis. Clustering, association rules, model fitting via the EM algorithm.
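Clustering's canonical method, k-means (Lloyd's algorithm), alternates between assigning points to the nearest centre and moving each centre to its cluster mean. A one-dimensional sketch with k = 2, toy data, and fixed initial centres for reproducibility:

```python
# Toy data with two clear groups.
data = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
centres = [0.0, 5.0]  # deliberate initial guesses

for _ in range(10):
    # Assignment step: each point joins its nearest centre.
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda j: abs(x - centres[j]))
        clusters[nearest].append(x)
    # Update step: each centre moves to its cluster mean.
    centres = [sum(c) / len(c) for c in clusters]
```

On unlabeled data like this, the algorithm recovers the two groups without being told they exist, which is the essence of unsupervised learning.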
Model fitting and prediction in the presence of correlation due to temporal and/or spatial association. ARIMA models.
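The simplest member of the ARIMA family, an AR(1) model x_t = φ·x_{t−1} + e_t, has a least-squares coefficient estimate that can be computed directly. A sketch on a noiseless series generated with φ = 0.5, so the estimate recovers it exactly:

```python
# Generate x_t = 0.5 * x_{t-1} exactly (no noise term).
series = [8.0]
for _ in range(9):
    series.append(0.5 * series[-1])

# Least-squares estimate of phi: regress x_t on x_{t-1}.
num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
phi_hat = num / den
```

Real series include noise, trends, and seasonality, which is where the full ARIMA machinery covered in the course comes in.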
Block 6 (4 weeks)
The legal, ethical, and security issues concerning data, including aggregated data. Proactive compliance with rules and, in their absence, principles for the responsible management of sensitive data. Case studies.
Advanced machine learning methods, with an undercurrent of natural language processing (NLP) applications. Bag of words, recommender systems, topic models, natural language as sequence data, Markov chains, and RNNs for text synthesis. An introduction to popular NLP libraries in Python.
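The bag-of-words representation named above discards word order and keeps only counts, turning each document into a vector over a shared vocabulary. A minimal sketch with invented toy documents:

```python
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat"]

# Shared vocabulary: every distinct word across all documents, sorted.
vocab = sorted(set(" ".join(docs).split()))

def bag_of_words(doc):
    # One count per vocabulary word; word order is thrown away.
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

vectors = [bag_of_words(d) for d in docs]
```

Sequence models such as Markov chains and RNNs, also covered in this course, exist precisely to recover the order information that bag of words discards.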
Statistical evidence from randomized experiments versus observational studies. Applications of randomization, e.g., A/B testing for website optimization.
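An A/B test of the kind mentioned here is often analyzed with a two-proportion z-test. A sketch with invented conversion counts for two website variants:

```python
import math

# Hypothetical experiment: visitors randomized to variants A and B.
conv_a, n_a = 120, 1000   # variant A: 12% conversion
conv_b, n_b = 160, 1000   # variant B: 16% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
# Pooled proportion under the null hypothesis of no difference.
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Because visitors were randomized, a small p-value supports a causal claim about the variant, which is exactly the contrast with observational studies this course draws.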
Spring: May - June
Capstone Project (8-10 Weeks)
A mentored group project based on real data and questions from a partner within or outside the university. Students will formulate questions and design and execute a suitable analysis plan. The group will work collaboratively to produce a project report, presentation, and possibly other products, such as a web application.