Computer Vision Decoded

EveryPoint

A tidal wave of computer vision innovation is rapidly reshaping everyone's lives, but not everyone has the time to read through a pile of news articles and work out what it means for them. In Computer Vision Decoded, we sit down with Jared Heinly, the Chief Scientist at EveryPoint, to discuss topics in today's quickly evolving world of computer vision and decode what they mean for you. If you want to be sure you understand everything happening in the world of computer vision, don't miss an episode!

  1. May 14

    Camera Types for 3D Reconstruction Explained

    In this episode of Computer Vision Decoded, hosts Jonathan Stephens and Jared Heinly explore the various types of cameras used in computer vision and 3D reconstruction. They discuss the strengths and weaknesses of smartphone cameras, DSLR and mirrorless cameras, action cameras, drones, and specialized cameras like 360, thermal, and event cameras. The conversation emphasizes the importance of understanding camera specifications, metadata, and the impact of different lenses on image quality. The hosts also provide practical advice for beginners in 3D reconstruction, encouraging them to start with the cameras they already own.

    Takeaways:
    - Smartphones are versatile and user-friendly for photography.
    - RAW images preserve more data than JPEGs, aiding in post-processing.
    - Mirrorless and DSLR cameras offer better low-light performance and lens flexibility.
    - Drones provide unique perspectives and programmable flight paths for capturing images.
    - 360 cameras allow for quick scene capture but may require additional processing for 3D reconstruction.
    - Event cameras capture rapid changes in intensity, useful for robotics applications.
    - Thermal and multispectral cameras are specialized for specific applications, not typically used for 3D reconstruction.
    - Understanding camera metadata is crucial for effective image processing (see the metadata-reading sketch after the episode list).
    - Choosing the right camera depends on the specific needs of the project.
    - Starting with a smartphone is a low barrier to entry for beginners in 3D reconstruction.

    This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io

    1 hr 16 min
  2. February 18

    Exploring Depth Maps in Computer Vision

    In this episode of Computer Vision Decoded, Jonathan Stephens and Jared Heinly explore the concept of depth maps in computer vision. They discuss the basics of depth and depth maps, their applications in smartphones, and the various types of depth maps. The conversation delves into the role of depth maps in photogrammetry and 3D reconstruction, as well as future trends in depth sensing and machine learning. The episode highlights the importance of depth maps in enhancing photography, gaming, and autonomous systems.

    Key Takeaways:
    - Depth maps represent how far away objects are from a sensor.
    - Smartphones use depth maps for features like portrait mode.
    - There are multiple types of depth maps, including absolute and relative.
    - Depth maps are essential in photogrammetry for creating 3D models.
    - Machine learning is increasingly used for depth estimation.
    - Depth maps can be generated from various sensors, including LiDAR.
    - The resolution and baseline of cameras affect depth perception (see the stereo-depth sketch after the episode list).
    - Depth maps are used in gaming for rendering and performance optimization.
    - Sensor fusion combines data from multiple sources for better accuracy.
    - The future of depth sensing will likely involve more machine learning applications.

    Episode Chapters:
    00:00 Introduction to Depth Maps
    00:13 Understanding Depth in Computer Vision
    06:52 Applications of Depth Maps in Photography
    07:53 Types of Depth Maps Created by Smartphones
    08:31 Depth Measurement Techniques
    16:00 Machine Learning and Depth Estimation
    19:18 Absolute vs Relative Depth Maps
    23:14 Disparity Maps and Depth Ordering
    26:53 Depth Maps in Graphics and Gaming
    31:24 Depth Maps in Photogrammetry
    34:12 Utilizing Depth Maps in 3D Reconstruction
    37:51 Sensor Fusion and SLAM Technologies
    41:31 Future Trends in Depth Sensing
    46:37 Innovations in Computational Photography

    This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io

    58 min
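As a rough companion to the camera-metadata takeaway in the first episode: below is a minimal sketch of reading the EXIF fields that matter most for 3D reconstruction (camera model, focal length, image size) from a photo. It assumes the Pillow library and a hypothetical file name; it is an illustration of the idea, not code discussed in the episode.

```python
from PIL import Image, ExifTags  # assumes the Pillow library is installed

def exif_summary(path):
    """Return the EXIF metadata most relevant to 3D reconstruction."""
    img = Image.open(path)
    exif = img.getexif()
    # Photo-specific tags such as FocalLength live in the Exif sub-IFD (tag 0x8769).
    sub_ifd = exif.get_ifd(0x8769)
    tags = {ExifTags.TAGS.get(k, k): v for ifd in (exif, sub_ifd) for k, v in ifd.items()}
    return {
        "Model": tags.get("Model"),                          # camera body
        "FocalLength": tags.get("FocalLength"),              # in mm
        "FocalLengthIn35mmFilm": tags.get("FocalLengthIn35mmFilm"),
        "ImageSize": img.size,                               # (width, height) in pixels
    }

# Hypothetical usage:
# print(exif_summary("IMG_0001.jpg"))
```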
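The baseline-and-resolution takeaway from the depth maps episode reflects the standard rectified-stereo relation depth = focal length x baseline / disparity. Below is a minimal NumPy sketch of that conversion, assuming a rectified stereo pair; the function name and numbers are illustrative assumptions, not code from the episode.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) into an absolute depth map (meters)
    for a rectified stereo pair, using Z = f * B / d."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)  # zero disparity -> point at infinity / no match
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# A one-pixel disparity error matters far more for distant points, which is why a
# longer baseline and higher resolution (more pixels of disparity) improve depth precision.
example = np.array([[20.0, 10.0],
                    [ 2.0,  0.0]])
print(disparity_to_depth(example, focal_length_px=1000.0, baseline_m=0.10))
# -> [[ 5. 10.], [50. inf]]
```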

Ratings & Reviews

5 out of 5
5 Ratings
