2 episodes

Computer Science is composed of many different areas of research, such as Algorithms, Programming Languages, and Cryptography. Each of these areas has its own problems of interest, publications of record, idioms of communication, and styles of thought.
Segfault is a podcast series that serves as a map of the field. Each episode features discussion of the core motivations, ideas, and methods of one particular area, with a mix of academics ranging from first-year graduate students to long-tenured professors.
I’m your host, Soham Sankaran, the founder of Pashi, a start-up building software for manufacturing. I'm on leave from the PhD program in Computer Science at Cornell, where I work on distributed systems and robotics, and I started Segfault to be the guide to CS research that I desperately wanted when I was just starting out in the field.

Segfault with Soham Sankaran (Honesty Is Best)



    Episode 2: Computer Vision with Professor Bharath Hariharan


    Cornell Professor and former Facebook AI Researcher Bharath Hariharan joins me to discuss what got him into Computer Vision, how the transition to deep learning has changed the way CV research is conducted, and the still-massive gap between human perception and what machines can do.


    Consider subscribing via email to receive every episode and occasional bonus material in your inbox.


    Soham Sankaran’s Y Combinator-backed startup, Pashi, is recruiting a software engineer to do research-adjacent work in programming languages and compilers. If you’re interested, email soham [at] pashi.com for more information.


    Go to transcript

    Note: If you’re in a podcast player, this will take you to the Honesty Is Best website to view the full transcript. Some players like Podcast Addict will load the whole transcript with time links below the Show Notes, so you can just scroll down to read the transcript without needing to click the link. Others like Google Podcasts will not show the whole transcript.


    Show notes

    Participants:


    Soham Sankaran (@sohamsankaran) is the founder of Pashi, and is on leave from the PhD program in Computer Science at Cornell University.


    Professor Bharath Hariharan is an Assistant Professor in the Department of Computer Science at Cornell University. He works on recognition in Computer Vision.


    Material referenced in this podcast:


    ‘Building Rome in a Day’, a project to construct a 3D model of Rome using photographs found online, from the University of Washington’s Graphics and Imaging Laboratory (GRAIL): project website, original paper by Sameer Agarwal, Noah Snavely, Ian Simon, Steven M. Seitz, and Richard Szeliski in ICCV 2009.


    The Scale-Invariant Feature Transform (SIFT) algorithm: wikipedia, original paper by David G. Lowe in ICCV 1999.


    The Perceptron: wikipedia, original paper by Cornell’s own Frank Rosenblatt in Psychological Review Vol. 65 (1958). Rosenblatt was a brilliant psychologist with exceptionally broad research interests across the social sciences, neurobiology, astronomy, and engineering. The perceptron, which is a forerunner of much of modern artificial intelligence, initially received great acclaim in academia and the popular press for accomplishing the feat of recognizing triangular shapes through training. In the 60s, however, legendary computer scientists Marvin Minsky (a high-school classmate of Rosenblatt’s) and Seymour Papert released a book, Perceptrons, that made the argument that the perceptron approach to artificial intelligence would fail at more complex tasks, resulting in it falling out of fashion for a few decades in favour of Minsky’s preferred approach, Symbolic AI. Symbolic AI famously failed to produce tangible results, resulting in the AI winter of the 80s and 90s, a fallow period for funding and enthusiasm. Rosenblatt, meanwhile, died in a boating accident in 1971 at the relatively young age of 43, 40 years too early to see himself vindicated in the battle between Minsky’s Symbolic AI and what we now call Machine Learning.
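    The learning rule at the heart of the perceptron is simple enough to sketch in a few lines of plain Python. This is a generic illustration, not Rosenblatt's original formulation; the AND dataset, threshold convention, and learning rate are illustrative choices:

```python
# A minimal perceptron sketch: a weighted sum with a threshold, nudged
# toward the right answer whenever it makes a mistake.
def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n  # weights
    b = 0.0        # bias
    for _ in range(epochs):
        for x, y in samples:
            # Predict 1 if the weighted sum clears the threshold.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # On a mistake, move the weights toward the correct label.
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the rule converges on it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

    The same rule fails on non-separable problems like XOR, which is exactly the limitation Minsky and Papert made famous.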


    Bharath’s CVPR 2015 paper Hypercolumns for Object Segmentation and Fine-grained Localization with Pablo Arbeláez, Ross Girshick, and Jitendra Malik, in which information pulled from the middle layers of a convolutional neural network (CNN) trained for object recognition was used to establish fine-grained boundaries for objects in an image.
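    The core construction behind hypercolumns can be sketched with numpy stand-ins for CNN activations: upsample feature maps from several depths to the input resolution and stack them per pixel. The layer shapes and nearest-neighbour upsampling here are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def upsample_nn(fmap, out_h, out_w):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    c, h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[:, rows[:, None], cols[None, :]]

def hypercolumns(feature_maps, out_h, out_w):
    """Stack per-pixel features from several layers into one descriptor.

    feature_maps: list of (C_i, H_i, W_i) arrays from different depths.
    Returns a (sum(C_i), out_h, out_w) array: each spatial location gets
    a 'hypercolumn' mixing coarse semantics with fine spatial detail.
    """
    return np.concatenate(
        [upsample_nn(f, out_h, out_w) for f in feature_maps], axis=0)

# Toy stand-ins for activations at three depths of a CNN on a 32x32 input.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 32, 32)),  # early layer: fine detail
          rng.standard_normal((32, 16, 16)),  # middle layer
          rng.standard_normal((64, 8, 8))]    # deep layer: coarse semantics
hc = hypercolumns(layers, 32, 32)  # shape (112, 32, 32)
```

    A per-pixel classifier trained on these stacked vectors can then draw object boundaries far more precisely than the coarse final layer alone.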


    ImageNet, originally created by then Princeton (now Stanford) Professor Fei-Fei Li and her group in 2009: A vast database of images associated with common nouns (table, badger, ocean, etc.). The high quality & scale of this dataset, combined with the vigorous competition between groups of researchers to top the ImageNet benchmarks, fuelled massive advances in object recognition over the last decade.


    Credits:


    Created and hosted by Soham Sankaran.


    Mixed and Mastered by Varun Patil (email).

    • 54 min
    Episode 1: Programming Languages


    Adrian Sampson, Alexa VanHattum, and Rachit Nigam of Cornell’s Capra group join me to discuss their differing perspectives on what the research field of Programming Languages (PL) is really about, what successful PL research looks like, and what got them interested in working in the area in the first place. We also talk about some of their recent research work on programming languages for hardware accelerator design.






    Go to transcript



    Show notes

    Participants:


    Soham Sankaran (@sohamsankaran) is the founder of Pashi, and is on leave from the PhD program in Computer Science at Cornell University.


    Adrian Sampson (@samps) is an Assistant Professor in the Department of Computer Science at Cornell University. He works on programming languages and computer architecture.


    Alexa VanHattum (@avanhatt) is a PhD student in CS at Cornell who is advised by Adrian Sampson. She works on systems programming languages, applied formal methods, and usability for programming tools.


    Rachit Nigam (@notypes) is a PhD student in CS at Cornell who is also advised by Adrian Sampson. He is interested in tools and languages for hardware design.


    Adrian, Alexa, and Rachit are all part of the Capra Group (Computer Architecture and Programming Abstractions) at Cornell, which Adrian directs.


    Material referenced in this podcast:


    Logic for Hackers/Logic for Systems at Brown University: 2014 edition, 2020 edition.


    The Structure and Interpretation of Computer Programs, commonly known as SICP, an influential introductory textbook based on MIT’s original introduction to computer science course, taught in the functional programming language Scheme. It formed the basis for introductory CS courses across the world, and though most such courses have been phased out, Soham’s intro to CS course at Yale, CPSC 201, continues its legacy to this day.


    Higher-order functions: functions that take functions as input and/or return functions as results.
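    For instance, in a generic Python sketch (not an example from the episode), `compose` both takes functions as arguments and returns one:

```python
def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

def twice(f):
    """Return a function that applies f two times."""
    return compose(f, f)

inc = lambda x: x + 1
add2 = twice(inc)  # a function built out of functions: add2(3) == 5

# The built-in map is itself higher-order: it takes inc as an argument.
incremented = list(map(inc, [1, 4, 9]))  # [2, 5, 10]
```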


    Adrian’s 2010 OOPSLA paper Composable specifications for structured shared-memory communication. Full text of the paper available here.


    Rachit’s Dahlia paper from PLDI 2020: Predictable Accelerator Design with Time-Sensitive Affine Types.


    VHSIC Hardware Description Language, better known as VHDL (VHSIC stands for Very High Speed Integrated Circuit): a hardware description language used to specify digital circuits.


    Hans Boehm’s 2005 PLDI paper Threads Cannot Be Implemented as a Library.


    Flatt et al.’s Programming Languages as Operating Systems from ICFP 1999. This paper was alternatively titled Revenge of the Son of the Lisp Machine.


    Caroline Trippel’s 2018 MICRO-51 paper CheckMate: Automated Synthesis of Hardware Exploits and Security Litmus Tests.


    Daniel Jackson’s Alloy, an “open source language and analyzer for software modeling”.


    John R. Ellis’ 1985 Yale PhD thesis – Bulldog: A Compiler for VLIW Architectures.


    Credits:


    Created and hosted by Soham Sankaran.


    Mixed and Mastered by Varun Patil (email).


    Transcribed by Pratika Prabhune, Soham Sankaran, and Eric Lu.


    Transcript

    [00:00:00]

    Adrian: There’s something about programming languages – it’s like a lens of looking at problems…

    • 1 hr 16 min
