14 episodes

This series is host to episodes created by the Department of Computer Science, University of Oxford, one of the longest-established Computer Science departments in the country.

The series reflects this department's world-class research and teaching by providing talks that encompass topics such as computational biology, quantum computing, computational linguistics, information systems, software verification, and software engineering.

    • video
    Medicine and Physiology in the Age of Dynamics

    Medicine and Physiology in the Age of Dynamics: Newton Abraham Lecture 2020, delivered by Professor Alan Garfinkel (2019-2020 Newton Abraham Visiting Professor, University of Oxford; Professor of Medicine (Cardiology) and Integrative Biology and Physiology, University of California, Los Angeles).

    • 1 hr 9 min
    • video
    Can one Define Intelligence as a Computational Phenomenon?

    Strachey Lecture delivered by Leslie Valiant. Supervised learning is a cognitive phenomenon that has proved amenable to mathematical definition and analysis, as well as to exploitation as a technology. The question we ask is whether one can build on our understanding of supervised learning to define broader aspects of the intelligence phenomenon. We regard reasoning as the major component that needs to be added. We suggest that the central challenge is therefore to unify the formulation of these two phenomena, learning and reasoning, into a single framework with a common semantics. Based on such a semantics, one would aim to learn rules with the same success with which predicates can be learned, and then to reason with them in a manner as principled as conventional logic offers. We discuss how Robust Logic fits such a role. We also discuss the challenges of exploiting such an approach to create artificial systems with greater power, for example with regard to common-sense capabilities, than those currently realized by end-to-end learning.

    • 1 hr 5 min
    • video
    Strachey Lecture - Doing for our robots what evolution did for us

    Professor Leslie Kaelbling (MIT) gives the 2019 Strachey Lecture. The Strachey Lectures are generously supported by OxFORD Asset Management. We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in 'the factory' (that is, at engineering time) and in 'the wild' (that is, when the robot is delivered to a customer). I will share some general thoughts about strategies for robot design and then talk in detail about some work I have been involved in, both on the design of an overall architecture for an intelligent robot and on strategies for learning to integrate new skills into the repertoire of an already competent robot.

    • 55 min
    • video
    Strachey Lecture - Steps Towards Super Intelligence

    In his 1950 paper "Computing Machinery and Intelligence", Alan Turing estimated that sixty people working for fifty years should be able to program a computer (running at 1950 speed) to have human-level intelligence. AI researchers have spent orders of magnitude more effort than that and are still not close. Why has AI been so hard, and what are the problems we might work on in order to make real progress towards human-level intelligence, or even the super intelligence that many pundits believe is just around the corner? This talk will discuss the steps we can take, the aspects we really still do not have much of a clue about, what we might currently be getting completely wrong, and why it all could be centuries away. Importantly, the talk will distinguish between research questions and barriers to the adoption of research results as technology, with a little speculation on things that might go wrong (spoiler alert: it is the mundane that will have the big consequences, not the Hollywood scenarios that the press and some academics love to talk about).

    • 58 min
    • video
    Strachey Lecture - Privacy-preserving analytics in, or out of, the cloud

    This talk is about the experience of providing privacy when running analytics on users' personal data. The two-sided market of Cloud Analytics emerged almost accidentally, initially from click-through associated with users' responses to search results, and was then adopted by many other services, whether web mail or social media. The business model seen by the user is a free service (storage and tools for photos, video, social media, etc.). The value to the provider is untrammelled access to the user's data over space and time, allowing everything from upfront income from recommenders and targeted adverts to background market research about who is interested in what information, goods and services, when and where. The value to the user is increased personalisation. This all comes at a cost: of privacy (and the risk of loss of reputation or even money) for the user, and of running highly expensive data centres for the providers, with increased costs in bandwidth and energy consumption (mobile network costs and device battery life). The attack surface of our lives expands to cover just about everything. This talk examines several alternative directions in which this may evolve in the future. Firstly, we look at a toolchain for traditional cloud processing which offers privacy through careful control of the lifecycle of access to data, processing, and production of results, by combining several relatively new techniques. Secondly, we present a fully decentralised approach, on low-cost home devices, which can potentially lead to a large reduction in the risk of loss of confidentiality. Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales; http://creativecommons.org/licenses/by-nc-sa/2.0/uk/

    • 1 hr
    • video
    Strachey Lecture - The Continuing Evolution of C++

    Stroustrup discusses the development and evolution of C++. Development started in 1979, and since then C++ has grown to be one of the most widely used programming languages ever, with an emphasis on demanding industrial uses. It was released commercially in 1985 and evolved through one informal standard (“the ARM”) and several ISO standards: C++98, C++11, C++14, and C++17. How could an underfinanced language without a corporate owner succeed like that? What are the key ideas and design principles? How did the original ideas survive almost 40 years of development and 30 years of attention from a 100+ member standards committee? What is the current state of C++, and what is likely to happen over the next few years? What are the problems we are trying to address through language evolution?

    • 58 min
