52 Weeks of Cloud Noah Gift
- Technology
A weekly podcast on technical topics related to cloud computing, including MLOps, LLMs, AWS, Azure, GCP, multi-cloud, and Kubernetes.
-
Rust + LLMs: Supercharge MLOps & AI Ethics
Discover the power of Rust for Machine Learning & LLMs!
Supercharge your MLOps with Rust's performance, safety & tooling
Explore ethical considerations around LLMs & generative AI
Combine the best of Python & Rust for seamless interop
Dive into cutting-edge Rust LLM workflows with CUDA, AWS, Hugging Face & more!
Revolutionize your Machine Learning journey with Rust! This in-depth video explores the immense potential of leveraging Rust's unparalleled performance, safety guarantees, and robust tooling ecosystem to create blazing-fast, secure, and scalable ML solutions.
Discover how Rust empowers you to tackle the unique challenges of MLOps head-on, from binary deployments and concurrent programming to GPU acceleration with NVIDIA CUDA. Learn to seamlessly bridge the gap between Python's rich data science ecosystem and Rust's raw power using tools like PyO3 and Polars.
Dive deep into the ethical considerations surrounding Large Language Models (LLMs) and generative AI, gaining invaluable insights to guide responsible development and deployment. Explore cutting-edge Rust LLM workflows integrating AWS Lambda, Hugging Face, and more to push the boundaries of what's possible!
Whether you're a seasoned Rustacean looking to dive into ML or a Python aficionado eager to harness Rust's potential, this video is your launchpad to the forefront of Machine Learning innovation. Buckle up and get ready to supercharge your AI journey with Rust! -
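To make the concurrency claim concrete, here is a minimal std-only sketch (the function name and data are hypothetical, not from the episode): min-max scaling of feature columns with one thread per column, the kind of data-race-free parallelism Rust's ownership model guarantees at compile time.

```rust
use std::thread;

// Hypothetical example: min-max scale each feature column to [0, 1],
// processing columns in parallel. `move` transfers ownership of each
// column into its thread, so no data can be shared unsafely.
fn scale_columns(columns: Vec<Vec<f64>>) -> Vec<Vec<f64>> {
    let handles: Vec<_> = columns
        .into_iter()
        .map(|col| {
            thread::spawn(move || {
                let min = col.iter().cloned().fold(f64::INFINITY, f64::min);
                let max = col.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
                let range = if max > min { max - min } else { 1.0 };
                col.iter().map(|v| (v - min) / range).collect::<Vec<f64>>()
            })
        })
        .collect();
    // Joining collects each column's scaled values in order.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let data = vec![vec![1.0, 2.0, 3.0], vec![10.0, 20.0, 30.0]];
    println!("{:?}", scale_columns(data)); // each column scaled to [0, 1]
}
```

In Python this pattern usually needs multiprocessing to sidestep the GIL; in Rust it is plain threads, and a PyO3 wrapper could expose the same function to Python directly.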
Software Grows like Trees, Not Playgrounds
Building software is like growing a fig tree, not a playground. Playgrounds need upfront design; fig trees require dynamic adjustments.
-
Five Whys Fix Failures
Asking "Why?" five times gets to the root of a problem: the servers crashed; there were no logs to explain it; the logs were lost because cheap storage and bad configurations were the underlying cause.
-
The Pitfalls of Rigid Agile
Agile done wrong stifles progress. Keep it lightweight and balanced for best results.
-
Perpetuating Harm - How AI Can Reinforce and Spread Bias
Description:
We explore an alarming potential outcome with AI: when biased data or engagement incentives around sensitive issues are taken as input, systems can exponentially amplify and propagate harm across populations. Without oversight, a strict focus on profit could promote extremism, discrimination, and violence over time, even if unintended at first.
Key problems span:
Creating bias from tainted datasets
Optimization driving selective exposure
Enriching engagement at the cost of marginalized groups
Social costs far exceeding private short-term gains
Understanding these dynamics is key to policy reforms including algorithmic accountability and ethical AI training. Companies have an obligation to assess the true impact of their systems. -
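The "optimization driving selective exposure" dynamic can be made concrete with a toy simulation (entirely hypothetical, not from the episode): a feed that gives all new impressions to whichever of two items currently leads on engagement, showing how a tiny initial bias compounds.

```rust
// Hypothetical winner-take-all feed: each round, the item with more
// accumulated engagement receives every new impression, and earns a
// fixed amount of engagement from them.
fn simulate_feedback(mut engagement: [f64; 2], rounds: u32) -> f64 {
    for _ in 0..rounds {
        let winner = if engagement[0] >= engagement[1] { 0 } else { 1 };
        engagement[winner] += 100.0; // engagement earned from the impressions
    }
    // Return item 0's share of total engagement after the loop.
    engagement[0] / (engagement[0] + engagement[1])
}

fn main() {
    // Item 0 starts with only a 5% head start (105 vs. 100)...
    let share = simulate_feedback([105.0, 100.0], 20);
    // ...yet after 20 rounds it holds ~95% of all engagement.
    println!("item 0 final engagement share: {:.2}", share);
}
```

The point of the sketch is that no malicious intent is required: a ranking rule optimized purely for engagement converts a small initial skew into near-total dominance, which is the amplification mechanism the episode describes.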
Regulatory Entrepreneurship and AI - Changing Laws Through Market Domination
We explain the concept of "regulatory entrepreneurship" - where companies strategically penetrate markets in legal gray areas and leverage scale and public sentiment to rewrite laws in their favor over time. Key mechanisms like rapid user growth and lobbying are concerning when applied to transformative AI with potential downsides.
Using examples such as Airbnb and Uber, whose market domination produced negative civic outcomes, we can imagine similar dynamics unfolding as generative models commercialize. Societal costs always accompany the creators' profits. Understanding these incentives allows preemptive policy to ensure balance.