AI Safety Fundamentals: Alignment BlueDot Impact
-
- Technology
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment
-
How to get feedback
Feedback is essential for learning. Whether you’re studying for a test, trying to improve in your work, or want to master a difficult skill, you need feedback. The challenge is that feedback can often be hard to get. Worse, if you get bad feedback, you may end up worse off than before.
Original text: https://www.scotthyoung.com/blog/2019/01/24/how-to-get-feedback/
Author: Scott Young
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
-
Public by default: How we manage information visibility at Get on Board
I’ve been obsessed with managing information and communications in a remote team since Get on Board started growing. Reducing the bus factor is a primary motivation, but another just as important is diminishing reliance on synchronicity. When what I know is documented and accessible to others, I’m less likely to be a bottleneck for anyone else on the team. So if I’m busy, minding family matters, on vacation, or sick, I won’t be blocking anyone. This, in turn, gives everyone in the team the f...
-
Writing, Briefly
(In the process of answering an email, I accidentally wrote a tiny essay about writing. I usually spend weeks on an essay. This one took 67 minutes: 23 of writing, and 44 of rewriting.)
Original text: https://paulgraham.com/writing44.html
Author: Paul Graham
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
-
Being the (Pareto) Best in the World
This post introduces the concept of Pareto frontiers. The top comment by Rob Miles also ties it to comparative advantage. While reading, consider what Pareto frontiers your project could place you on.
Original text: https://www.lesswrong.com/posts/XvN2QQpKTuEzgkZHY/being-the-pareto-best-in-the-world
Author: John Wentworth
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
-
How to succeed as an early-stage researcher: the “lean startup” approach
I am approaching the end of my AI governance PhD, and I’ve spent about 2.5 years as a researcher at FHI. During that time, I’ve learnt a lot about the formula for successful early-career research. This post summarises my advice for people in the first couple of years. Research is really hard, and I want people to avoid the mistakes I’ve made.
Original text: https://forum.effectivealtruism.org/posts/jfHPBbYFzCrbdEXXd/how-to-succeed-as-an-early-stage-researcher-the-lean-startup#Conclusion
Author: To...
-
Become a person who Actually Does Things
The next four weeks of the course are an opportunity for you to actually build a thing that moves you closer to contributing to AI Alignment, and we're really excited to see what you do! A common failure mode is to think "Oh, I can't actually do X" or to say "Someone else is probably doing Y." You probably can do X, and it's unlikely anyone is doing Y! It could be you!
Original text: https://www.neelnanda.io/blog/become-a-person-who-actually-does-things
Author: Neel Nanda
A podcast by BlueDot ...