25 episodes

The free lunch is over! Until the turn of the century, computer systems became faster without any particular effort simply because the hardware they ran on increased its clock speed with every new release. This trend has ended, and today's CPUs stall at around 3 GHz. The size of modern computer systems in terms of contained processing elements (cores in CPUs/GPUs, CPUs/GPUs in compute nodes, compute nodes in clusters), however, still increases constantly. This has caused a paradigm shift in writing software: instead of optimizing code for a single thread, applications now need to solve their tasks in parallel in order to achieve noticeable performance gains. Distributed computing, i.e., the distribution of work across (potentially) physically isolated compute nodes, is the most extreme form of parallelization.
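The shift described above, from optimizing one thread to dividing work among workers, can be sketched in a few lines of plain Python (a toy illustration using the standard library, not one of the systems covered in the lecture):

```python
# Toy example: partition a task across several workers instead of
# computing it in a single sequential pass.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Sum one partition of the input data."""
    return sum(chunk)

data = list(range(1_000_000))

# Partition the input into four disjoint chunks, one per worker.
chunks = [data[i::4] for i in range(4)]

# Each worker sums its partition; the partial results are then combined.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))
```

The same divide, compute locally, and combine pattern underlies distributed systems; there the partitions live on separate machines and the combine step requires network communication.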

Big data analytics and management form a multi-million-dollar market that grows constantly! The ability to control and utilize large amounts of data is the most valuable capability of today's computer systems. Because data volumes grow rapidly, and with them the complexity of the questions they should answer, data analytics, i.e., the extraction of any kind of information from the data, becomes increasingly difficult. As data analytics systems cannot hope for their hardware to get any faster to cope with performance problems, they need to embrace new software trends that let their performance scale with the still-increasing number of processing elements.

In this lecture, we take a look at various technologies involved in building distributed, data-intensive systems. We start by discussing fundamental concepts in distributed computing, such as data models, encoding formats, messaging, data replication and partitioning, fault tolerance, and batch and stream processing. In between, we consider different practical systems from the Big Data Landscape, such as Akka and Spark. In the end, we concentrate on data management aspects, such as distributed database management system architectures and distributed query optimization.
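To give a flavor of the batch-processing model behind systems like Spark, here is a minimal map-shuffle-reduce word count in plain Python (an illustrative sketch of the pattern only, not Spark's actual API):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in a document.
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key.
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data", "data analytics", "big big data"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle_phase(pairs))
```

In a real distributed engine, the map and reduce phases run on many machines in parallel and the shuffle moves data between them over the network; the lecture's Spark episodes cover how this is done at scale.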

Distributed Data Management (ST 2021) - tele-TASK Dr. Thorsten Papenbrock

    • Education


    • Lecture Summary (1 hr 57 min)
    • Federated DBMSs (1 hr 16 min)
    • Stream Processing (1 hr 20 min)
    • Exercise 1 Evaluation (1 hr 29 min)
    • Spark Batch Processing (2) (1 hr 37 min)
    • Spark Batch Processing (1 hr 29 min)

