
Open-Source Information Retrieval Courses @ TU Wien


Hi there 👋 Welcome to my teaching materials!

I'm working on Information Retrieval at the Vienna University of Technology (TU Wien), mainly focusing on the award-winning master-level Advanced Information Retrieval course. I try to create engaging, fun, and informative lectures and exercises – both in-person and online!

Please feel free to open an issue or a pull request if you want to add something, find a mistake, or think something should be explained better!

Contents


Advanced Information Retrieval 2021 & 2022

🏆 Won the Best Distance Learning Award 2021 @ TU Wien

Information Retrieval is the science behind search technology. The most visible instances are certainly the large web search engines, such as Google and Bing, but information retrieval appears everywhere we have to deal with unstructured data (e.g., free text).

A paradigm shift. Starting in 2019, the Information Retrieval research field underwent an enormous paradigm shift towards utilizing BERT-based language models in various forms, which – trained on large-scale data – brought huge leaps in quality for search results. This course aims to showcase a slice of these advances in state-of-the-art IR research towards the next generation of search engines.


New in 2022: Use GitHub Discussions to ask questions about the lecture!


Syllabus

The AIR syllabus overview

Lectures

Below we provide links to recordings, slides, and closed captions for all lectures. A complete playlist is available on YouTube.

| Topic | Description | Recordings | Slides | Text |
| --- | --- | --- | --- | --- |
| 0: Introduction 2022 | Infos on requirements, topics, organization | YouTube | PDF | Transcript |
| 1: Crash Course IR Fundamentals | We explore two fundamental building blocks of IR: indexing and ranked retrieval | YouTube | PDF | Transcript |
| 2: Crash Course IR Evaluation | We explore how to evaluate ranked retrieval results and common IR metrics (MRR, MAP, NDCG) | YouTube | PDF | Transcript |
| 3: Crash Course IR Test Collections | We get to know existing IR test collections, look at how to create your own, and survey potential biases & their effects in the data | YouTube | PDF | Transcript |
| 4: Word Representation Learning | We take a look at word representations and basic word embeddings, including a usage example in Information Retrieval | YouTube | PDF | Transcript |
| 5: Sequence Modelling | We look at CNNs and RNNs for sequence modelling, including the basics of the attention mechanism | YouTube | PDF | Transcript |
| 6: Transformer & BERT | We study the Transformer architecture; pre-training with BERT; the HuggingFace ecosystem, where the community can share models; and give an overview of Extractive Question Answering (QA) | YouTube | PDF | Transcript |
| 7: Introduction to Neural Re-Ranking | We look at the workflow (including training and evaluation) of neural re-ranking models and some basic neural re-ranking architectures | YouTube | PDF | Transcript |
| 8: Transformer Contextualized Re-Ranking | We learn how to use Transformers (and the pre-trained BERT model) for neural re-ranking: both for the best possible results and in more efficient approaches that trade off quality for performance | YouTube | PDF | Transcript |
| 9: Domain Specific Applications | Guest lecture by @sophiaalthammer. We learn about different task settings, challenges, and solutions in domains other than web search | YouTube | PDF | Transcript |
| 10: Dense Retrieval ❤ Knowledge Distillation | We learn about the (potential) future of search: dense retrieval. We study the setup, specific models, and how to train DR models. Then we look at how knowledge distillation greatly improves the training of DR models, and at topic-aware sampling to get state-of-the-art results | YouTube | PDF | Transcript |
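As a small taste of the evaluation metrics covered in the IR evaluation crash course (MRR, NDCG), here is a minimal, self-contained sketch of two of them over per-query relevance grades; the function and variable names are illustrative, not part of the course materials:

```python
import math

def mrr(ranked_relevance, cutoff=10):
    """Mean Reciprocal Rank: the average, over queries, of 1/rank of the
    first relevant result (0 if none appears within the cutoff)."""
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels[:cutoff], start=1):
            if rel > 0:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def ndcg(rels, ideal_rels, cutoff=10):
    """Normalized Discounted Cumulative Gain at a cutoff: DCG of the
    returned ranking divided by the DCG of the ideal ordering."""
    def dcg(grades):
        return sum(g / math.log2(rank + 1)
                   for rank, g in enumerate(grades[:cutoff], start=1))
    ideal = dcg(sorted(ideal_rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0

# Binary relevance for two queries: the first relevant hit sits at
# rank 2 for query 1 and at rank 1 for query 2 -> MRR = (0.5 + 1) / 2
print(mrr([[0, 1, 0], [1, 0, 0]]))            # 0.75
print(round(ndcg([0, 1, 1], [1, 1, 0]), 3))   # 0.693
```

The logarithmic discount in NDCG rewards placing highly relevant documents early in the ranking, which is why it is a standard metric for graded-relevance test collections.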

Neural IR & Extractive QA Exercise

In this exercise, your group implements neural network re-ranking models, uses pre-trained extractive QA models, and analyzes their behavior with respect to our FiRA data.

📃 To the 2021 assignment

📃 To the 2022 assignment


Our Time-Optimized Content Creation Workflow for Remote Teaching

Our workflow creates an engaging remote learning experience for a university course, while minimizing the post-production time of the educators. We make use of ubiquitous and commonly free services and platforms, so that our workflow is inclusive for all educators and provides polished experiences for students. For each lecture, our learning materials provide:

- A recorded video, uploaded on YouTube, with exact slide timestamp indices, which enable an enhanced navigation UI
- A high-quality flow-text automated transcript of the narration with proper punctuation and capitalization, improved with a student participation workflow on GitHub

We automate the transformation and post-production between raw narrated slides and our published materials with custom tools.

Workflow Overview

Head over to our workflow folder for more information and our custom Python-based transformation tools. Or check out our full paper, published at the SIGCSE Technical Symposium 2022, for an in-depth evaluation of our methods:
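To illustrate one step of such a workflow, slide timestamp indices can be turned into YouTube chapter markers, which YouTube reads from description lines of the form "MM:SS Title" (the first chapter must start at 00:00). This is only a sketch of the idea, not the repository's actual tooling; all names are hypothetical:

```python
def to_youtube_chapters(slide_timings):
    """Format (start_second, slide_title) pairs as YouTube chapter lines.

    slide_timings is a hypothetical list of (seconds, title) tuples,
    e.g. as exported from a recording tool's slide-change log.
    """
    lines = []
    for seconds, title in slide_timings:
        minutes, secs = divmod(int(seconds), 60)
        hours, minutes = divmod(minutes, 60)
        # YouTube accepts both MM:SS and H:MM:SS timestamps
        stamp = (f"{hours}:{minutes:02d}:{secs:02d}" if hours
                 else f"{minutes:02d}:{secs:02d}")
        lines.append(f"{stamp} {title}")
    return "\n".join(lines)

print(to_youtube_chapters([(0, "Welcome"),
                           (95, "Indexing"),
                           (3720, "Ranked Retrieval")]))
# 00:00 Welcome
# 01:35 Indexing
# 1:02:00 Ranked Retrieval
```

Pasting the resulting lines into a video description is what enables the per-slide navigation UI mentioned above.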

A Time-Optimized Content Creation Workflow for Remote Teaching. Sebastian Hofstätter, Sophia Althammer, Mete Sertkan, and Allan Hanbury. SIGCSE Technical Symposium 2022. https://arxiv.org/abs/2110.05601
