16 episodes

This series hosts episodes created by the Department of Computer Science, University of Oxford, one of the longest-established Computer Science departments in the country.

The series reflects this department's world-class research and teaching by providing talks that encompass topics such as computational biology, quantum computing, computational linguistics, information systems, software verification, and software engineering.

Computer Science Oxford University

    • Education

    • video
    Strachey Lecture: Getting AI Agents to Interact and Collaborate with Us on Our Terms

    As AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. This requires AI systems to exhibit behavior that is explainable to humans. Synthesizing such behavior requires AI systems to reason not only with their own models of the task at hand, but also about the mental models of their human collaborators. At a minimum, AI agents need approximations of the humans' task and goal models, as well as the humans' model of the AI agent's task and goal models. The former guides the agent to anticipate and manage the needs, desires and attention of the humans in the loop, and the latter allows it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to be ready to provide customized explanations when needed. Using several case studies from our ongoing research, I will discuss how such multi-model reasoning forms the basis for explainable behavior in human-aware AI systems.

    • 1 hr 14 min
    • video
    Strachey Lecture: How Innovation Works: Serendipity, Energy and the Saving of Time

    Innovation is the main event of the modern age, the reason we experience both dramatic improvements in our living standards and unsettling changes in our society. Forget short-term symptoms like Donald Trump and Brexit; it is innovation itself that explains them and that will itself shape the 21st century for good and ill. Yet innovation remains a mysterious process, poorly understood by policy makers and businessmen, hard to summon into existence to order, yet inevitable and inexorable when it does happen.

    • 55 min
    • video
    Medicine and Physiology in the Age of Dynamics

    Medicine and Physiology in the Age of Dynamics: Newton Abraham Lecture 2020, delivered by Professor Alan Garfinkel (2019-2020 Newton Abraham Visiting Professor, University of Oxford; Professor of Medicine (Cardiology) and Integrative Biology and Physiology, University of California, Los Angeles).

    • 1 hr 9 min
    • video
    Can one Define Intelligence as a Computational Phenomenon?

    Strachey Lecture delivered by Leslie Valiant. Supervised learning is a cognitive phenomenon that has proved amenable to mathematical definition and analysis, as well as to exploitation as a technology. The question we ask is whether one can build on our understanding of supervised learning to define broader aspects of the intelligence phenomenon. We regard reasoning as the major component that needs to be added. We suggest that the central challenge is therefore to unify the formulation of these two phenomena, learning and reasoning, into a single framework with a common semantics. Based on such semantics, one would aim to learn rules with the same success that predicates can be learned, and then to reason with them in a manner as principled as conventional logic offers. We discuss how Robust Logic fits such a role. We also discuss the challenges of exploiting such an approach to create artificial systems with greater power than those currently realized by end-to-end learning, for example with regard to common-sense capabilities.

    • 1 hr 5 min
    • video
    Strachey Lecture - Doing for our robots what evolution did for us

    Professor Leslie Kaelbling (MIT) gives the 2019 Strachey Lecture. The Strachey Lectures are generously supported by OxFORD Asset Management. We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in 'the factory' (that is, at engineering time) and in 'the wild' (that is, when the robot is delivered to a customer). I will share some general thoughts about strategies for robot design and then talk in detail about some work I have been involved in, both on the design of an overall architecture for an intelligent robot and on strategies for learning to integrate new skills into the repertoire of an already competent robot.

    • 55 min
    • video
    Strachey Lecture - Steps Towards Super Intelligence

    Why has AI been so hard, and what are the problems that we might work on in order to make real progress towards human-level intelligence, or even the super intelligence that many pundits believe is just around the corner? In his 1950 paper "Computing Machinery and Intelligence", Alan Turing estimated that sixty people working for fifty years should be able to program a computer (running at 1950 speed) to have human-level intelligence. AI researchers have spent orders of magnitude more effort than that and are still not close. This talk will discuss the steps we can take, what aspects we really still do not have much of a clue about, what we might currently be getting completely wrong, and why it all could be centuries away. Importantly, the talk will distinguish between research questions and barriers to adopting research results as technology, with a little speculation on things that might go wrong (spoiler alert: it is the mundane that will have the big consequences, not the Hollywood scenarios that the press and some academics love to talk about).

    • 58 min
