59 episodes

Guest Interviews, discussing the possibilities and potential of AI in Austria.

Questions or suggestions? Write to austrianaipodcast@pm.me

Austrian Artificial Intelligence Podcast Manuel Pasieka

    • Technology


    57. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 2/2


    Hello and welcome back to the AAIP



    This is the second part of my interview with Eldar Kurtic about his research on how to optimize inference of deep neural networks.



    In the first part of the interview, we focused on sparsity and how high unstructured sparsity can be achieved without losing model accuracy on CPUs and, in part, on GPUs.



    In this second part of the interview, we are going to focus on quantization. Quantization tries to reduce model size by representing the model with numeric types of lower precision while retaining model performance. This means that a model that has, for example, been trained in a standard 32-bit floating point representation is converted during post-training quantization to a representation that uses only 8 bits, reducing the model size to one fourth.
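
    As a back-of-the-envelope illustration (a minimal sketch, not the specific method discussed in the episode), symmetric per-tensor post-training quantization of fp32 weights to int8 can look like this:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of fp32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an fp32 approximation of the original weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is one fourth of fp32: 1 byte vs 4 bytes per weight
assert q.nbytes * 4 == w.nbytes
```

    Production methods (per-channel scales, GPTQ-style error correction) are more involved, but the storage arithmetic is the same.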



    We will discuss how current quantization methods can be applied to quantize model weights down to 4 bits while retaining most of the model's performance, and why doing the same for the model's activations is much more tricky.



    Eldar will explain how current GPU architectures create two different types of bottlenecks: memory-bound and compute-bound scenarios. In memory-bound situations, the model size causes most of the inference time to be spent transferring model weights. Exactly in these situations quantization has its biggest impact, and reducing the model's size can accelerate inference.
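
    The distinction can be made concrete with a roofline-style estimate for single-batch token generation. All numbers below are illustrative assumptions, not measurements of any particular GPU or model:

```python
# Roofline-style back-of-envelope: is single-batch LLM decoding memory- or
# compute-bound? All numbers are illustrative assumptions.
peak_flops = 300e12           # GPU peak throughput, FLOP/s (assumed)
mem_bandwidth = 1.5e12        # GPU memory bandwidth, bytes/s (assumed)

params = 7e9                  # a 7B-parameter model
bytes_per_weight = 2          # fp16 storage
flops_per_token = 2 * params  # roughly 2 FLOPs per parameter per token

# Generating one token touches every weight once:
t_memory = params * bytes_per_weight / mem_bandwidth
t_compute = flops_per_token / peak_flops

print(f"memory: {t_memory*1e3:.2f} ms, compute: {t_compute*1e3:.2f} ms")
# Memory time dominates by orders of magnitude, so shrinking the weights
# (e.g. via 8-bit or 4-bit quantization) directly cuts per-token latency.
```

    With these assumed numbers, moving the weights takes far longer than the arithmetic, which is why weight quantization pays off in this regime.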



    Enjoy.



    ## AAIP Community

    Join our discord server and ask guests directly or discuss related topics with the community.

    https://discord.gg/5Pj446VKNU



    ### References

    Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

    Neural Magic: https://neuralmagic.com/

    IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/

    • 46 min
    56. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 1/2


    Hello and welcome back to the AAIP



    If you are an active Machine Learning engineer or are simply interested in Large Language Models, I am sure you have seen the discussions around quantized models and all kinds of new frameworks that have appeared recently and achieve astonishing inference performance of LLMs on consumer devices.



    If you are curious how modern Large Language Models with their billions of parameters can run on a simple laptop or even an embedded device, then this episode is for you.



    Today I am talking to Eldar Kurtic, researcher in the Alistarh group at IST Austria in Lower Austria and senior research engineer at the American startup Neural Magic.



    Eldar's research focuses on optimizing inference of deep neural networks. On the show he is going to explain in depth how sparsity and quantization work, and how they can be applied to accelerate inference of big models like LLMs on devices with limited resources.



    Because of the length of the interview, I decided to split it into two parts.



    This one, the first part, is going to focus on sparsity to reduce model size and enable faster inference by reducing the amount of memory and compute that is needed to store and run models.

    The second part is going to focus on quantization as a means to find model representations with lower numeric precision that require less memory to store and process, while retaining accuracy.



    In this first part about sparsity, Eldar will explain fundamental concepts like structured and unstructured sparsity, how and why they work, and why performant inference with unstructured sparsity is currently achievable mainly on CPUs and far less on GPUs.



    We will discuss how to achieve crazy numbers of up to 95% unstructured sparsity while retaining model accuracy, but also why it is difficult to leverage this, quote-unquote, reduction in model size to actually accelerate model inference.
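
    As a toy sketch of what unstructured sparsity means in practice, here is simple magnitude pruning (the research discussed in the episode uses more careful criteria, but the effect on the weight tensor is the same):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero.
    Unstructured: individual weights anywhere in the tensor may be dropped,
    which is why dense GPU kernels cannot easily exploit the zeros."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

w = np.random.randn(512, 512).astype(np.float32)
w_sparse = magnitude_prune(w, 0.95)
print(f"{(w_sparse == 0).mean():.0%} of weights are zero")
```

    The tensor keeps its dense shape; turning the zeros into actual speedups requires sparse storage formats and kernels, which is exactly the difficulty discussed in the episode.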



    Enjoy.



    ## AAIP Community

    Join our discord server and ask guests directly or discuss related topics with the community.

    https://discord.gg/5Pj446VKNU



    ### References

    Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

    Neural Magic: https://neuralmagic.com/

    IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/

    • 51 min
    55. Veronika Vishnevskaia - Ontec - Building RAG based Question-Answering Systems


    ## Summary

    Today on the show I am talking to Veronika Vishnevskaia, Solution Architect at ONTEC, where she specialises in building RAG-based Question-Answering systems.



    Veronika will provide a deep dive into all relevant steps to build a Question-Answering system: starting from data extraction and transformation, followed by text chunking, embedding, and hybrid retrieval strategies, and last but not least methods to mitigate hallucinations of LLMs during answer creation.
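
    As one simple example of the chunking step mentioned above (fixed-size character windows with overlap; this is only one of many strategies, and real systems often chunk along sentence or section boundaries instead):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character windows.
    The overlap keeps sentences that straddle a boundary retrievable
    from both neighboring chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

    Chunk size and overlap are tuning knobs: smaller chunks retrieve more precisely but lose context, larger ones keep context but dilute the embedding.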



    ## AAIP Community

    Join our discord server and ask guests directly or discuss related topics with the community.

    https://discord.gg/5Pj446VKNU



    ## TOC

    00:00:00 Beginning

    00:03:33 Guest Introduction

    00:08:51 Building Q/A Systems for businesses

    00:16:27 RAG: Data extraction & pre-processing

    00:26:08 RAG: Chunking & Embedding

    00:36:13 RAG: Information Retrieval

    00:48:59 Hallucinations

    01:02:21 Future RAG systems



    ## Sponsors

    - Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/

    - Belichberg GmbH: Software that Saves the Planet: The Future of Energy Begins Here - https://belichberg.com/



    ### References

    Veronika Vishnevskaia - https://www.linkedin.com/in/veronika-vishnevskaia/

    Ontec - www.ontec.at

    Review Hallucination Mitigation Techniques: https://arxiv.org/pdf/2401.01313.pdf

    Aleph-Alpha: https://aleph-alpha.com/de/technologie/

    • 1 hr 10 min
    54. Manuel Reinsperger - MLSec & LLM Security


    # Summary

    Today on the show I am talking to Manuel Reinsperger, Cybersecurity Expert and Penetration Tester. Manuel will provide us with an introduction to the topic of Machine Learning Security, with an emphasis on chatbot and Large Language Model security.



    We are going to discuss topics like AI Red Teaming, which focuses on identifying and testing AI systems within a holistic approach to system security. Another major theme of the episode are different attack scenarios against chatbots and agent systems.



    Manuel will explain to us what jailbreaks are, and methods to exfiltrate information and cause harm through direct and indirect prompt injection.



    Machine Learning security is a topic I am especially interested in, and I hope you are going to enjoy this episode and find it useful.



    ## AAIP Community

    Join our discord server and ask guests directly or discuss related topics with the community.

    https://discord.gg/5Pj446VKNU



    ## TOC

    00:00:00 Beginning

    00:02:05 Guest Introduction

    00:05:16 What is ML Security and how does it differ from Cybersecurity?

    00:25:56 Attacking chatbot systems

    00:41:12 Attacking RAGs with Indirect prompt injection

    00:54:43 Outlook on LLM security





    ## Sponsors

    - Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/

    - Belichberg GmbH: Software that Saves the Planet: The Future of Energy Begins Here - https://belichberg.com/



    ## References

    Manuel Reinsperger - https://manuel.reinsperger.org/

    Test your prompt hacking skills: https://gandalf.lakera.ai/

    Hacking Bing Chat: https://betterprogramming.pub/the-dark-side-of-llms-we-need-to-rethink-large-language-models-now-6212aca0581a

    AI-Attack Surface: https://danielmiessler.com/blog/the-ai-attack-surface-map-v1-0/

    InjectGPT: https://blog.luitjes.it/posts/injectgpt-most-polite-exploit-ever/

    https://github.com/jiep/offensive-ai-compilation

    AI Security Reference List: https://github.com/DeepSpaceHarbor/Awesome-AI-Security

    Prompt Injection into GPT: https://kai-greshake.de/posts/puzzle-22745/

    • 1 hr 5 min
    53. Peter Jeitschko - Impact of EU AI Regulation on AI startups


    ## Summary

    At the end of last year, the EU-AI Act was finalized and it spawned many discussions and a lot of doubts about the future of European AI companies.



    Today on the show Peter Jeitschko, founder of JetHire, an AI-based recruiting platform that uses Large Language Models to help recruiters find and work with candidates, talks about his perspective on the AI Act.



    We talk about the impact of the EU AI Act on their platform and how it falls into a high-risk use case under the new regulation. Peter describes how the AI Act forced them to create their company in the US and what he believes are the downsides of the EU regulation.



    He describes his experience that the EU regulations hinder innovation in Austria and Europe and increase legal costs and uncertainty, resulting in decision makers shying away from building and applying modern AI systems.



    I think this episode is valuable for decision makers and founders of AI companies that are affected by the upcoming AI Act and struggle to make sense of it.



    ## AAIP Community

    Join our discord server and ask guests directly or discuss related topics with the community.

    https://discord.gg/5Pj446VKNU



    ## TOC

    00:00:00 Beginning

    00:03:09 Guest Introduction

    00:04:45 A founders perspective on the AI Act

    00:13:45 JetHire - A recruiting platform affected by the AI Act

    00:19:58 Achieving regulatory goals with good engineering

    00:35:22 The mismatch between regulations and real world applications

    00:48:12 European regulations vs. global AI services



    ## Sponsors

    - Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/

    - Belichberg GmbH: Software that Saves the Planet: The Future of Energy Begins Here - https://belichberg.com/



    ## References

    Peter Jeitschko - https://www.linkedin.com/in/pjeitschko/

    Peter Jeitschko - https://peterjeitschko.com/

    JetHire - https://jethire.ai/

    https://www.holisticai.com/blog/requirements-for-high-risk-ai-applications-overview-of-regulations

    • 57 min
    52. Markus Keiblinger - Texterous - Building custom LLM Solutions


    # Summary

    For the last two years AI has been flooded with news about LLMs and their successes, but how many companies are actually making use of them in their products and services?

    Today on the show I am talking to Markus Keiblinger, Managing Partner of Texterous, a startup that focuses on building custom LLM solutions to help companies automate their business.

    Markus will tell us about his experience talking to and working with companies building such LLM-focused solutions.

    He tells us about the expectations companies have of the capabilities of LLMs, as well as what companies need in order to successfully implement LLM projects.

    We will discuss how Texterous has successfully focused on Retrieval Augmented Generation (RAG) use cases.

    RAG is a mechanism that makes it possible to provide information to an LLM in a controlled manner, so the LLM can answer questions or follow instructions making use of that information. This enables companies to use their data to solve problems with LLMs, without having to train or even fine-tune models. On the show, Markus will tell us about one of these RAG projects, and we will contrast building a RAG system on service provider offerings like OpenAI versus self-hosted open source alternatives.
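
    A minimal sketch of the retrieval and prompt-assembly steps behind RAG, assuming chunk embeddings have already been computed (all names are hypothetical illustrations, not Texterous's implementation):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, top_k=3):
    """Rank text chunks by cosine similarity between the query embedding
    and each chunk embedding, returning the top_k best matches."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    best = np.argsort(-sims)[:top_k]
    return [docs[i] for i in best]

def build_prompt(question, context_chunks):
    """Place the retrieved text in the prompt so the LLM answers from it
    rather than from its parametric memory."""
    context = "\n---\n".join(context_chunks)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```

    The assembled prompt is then sent to whichever LLM backend is used, which is where the service-provider-versus-self-hosted choice discussed in the episode comes in.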

    Last but not least, we talk about new use cases emerging with multi-modal models, and the long-term perspective for custom LLM solution providers like Texterous in focusing on building integrated solutions.



    ## AAIP Community

    Join our discord server and ask guests directly or discuss related topics with the community.

    https://discord.gg/5Pj446VKNU



    ## TOC

    00:00:00 Beginning

    00:03:31 Guest Introduction

    00:06:40 Challenges of applying AI in medical applications

    00:17:56 Homogeneous Ensemble Methods

    00:25:50 Combining base model predictions

    00:40:14 Composing Ensembles

    00:52:24 Explainability of Ensemble Methods



    ## Sponsors

    - Quantics: Supply Chain Planning for the new normal - the never normal - https://quantics.io/

    - Belichberg GmbH: Software that Saves the Planet: The Future of Energy Begins Here - https://belichberg.com/



    ### References

    - Markus Keiblinger: https://www.linkedin.com/in/markus-keiblinger

    - Texterous: https://texterous.com

    - Book: Conversations Plato Never Captured - but an AI did: https://www.amazon.de/Conversations-Plato-Never-Captured-but/dp/B0BPVS9H9R/

    • 46 min
