TechFirst with John Koetsier
Tech that is changing the world. Innovators who are shaping the future.
Deep discussions with diverse leaders from Silicon Valley giants and scrappy global startups. Plus some short monologues based on my Forbes columns.
-
Apple Vision Pro: future of surgical training?
Is the Apple Vision Pro the future of surgical training?
In this episode of TechFirst, host John Koetsier discusses the transformative impact of virtual reality (VR) on surgical training, highlighting the cost-efficiency and effectiveness of VR in reducing the learning curve for surgeons.
The conversation features Richard Vincent, CEO of Fundamental VR, who elaborates on how VR technology, particularly the Apple Vision Pro, is revolutionizing surgical education by offering rapid, repeatable training sessions without the logistical hurdles of traditional methods. They explore the hardware agnosticism of Fundamental VR's software, which ensures compatibility with various VR platforms, and delve into the new possibilities unlocked by the Apple Vision Pro's advanced features, including its intuitive control system, powerful compute capacity, and exceptional optics.
The discussion also touches on the incorporation of haptics for a more immersive training experience, the potential of VR for remote collaborative training, and the broader implications of VR technology in the medical field.
00:00 Unlocking the Future of Surgical Training with VR
01:15 The Cost-Effectiveness of VR in Surgical Training
03:13 Achieving Competence: The Role of VR in Surgery
04:45 Hardware: From Oculus to Apple Vision Pro
07:04 The Revolutionary Apple Vision Pro in Surgical Training
10:35 The Power of Haptics: Enhancing VR Training with Physical Feedback
13:07 The Impact of Device Cost on VR Training Accessibility
14:34 Expanding Horizons: VR's Role in Remote Surgery Training
17:03 The Future of Medical Training and Collaboration with VR
18:48 Apple Vision Pro: A Game-Changer for Medical VR Applications
20:15 Closing Thoughts and Future Prospects
-
AGI in 3 to 8 years
When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence ... the kind of intelligence that can teach itself and grow into a vastly larger intellect than any individual human?
According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.
We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror human intelligence complexity.
We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet's evolution, and consider the role of humans in a world where AI surpasses human intelligence.
00:00 Introduction to the Future of AI
01:28 Predicting the Timeline of Artificial General Intelligence
02:06 The Role of LLMs in the Path to AGI
05:23 The Impact of AI on Jobs and Economy
06:43 The Future of AI Development
10:35 The Role of Humans in a World with AGI
35:10 The Diverse Future of Human and Post-Human Minds
36:51 The Challenges of Transitioning to a World with AGI
39:34 Conclusion: The Future of AGI
-
Oysters reporting water quality? Not science fiction!
Can you use sentinel oysters and other mollusks to track water quality near your cities, beaches, or the Great Barrier Reef?
Actually ... yes.
In this episode of TechFirst, host John Koetsier chats with the CEO of Moloscan, a company focused on bio-monitoring and protection of marine environments using live shellfish.
The company uses aquatic bivalves, such as oysters, mussels, and clams, to monitor the environment. These mollusks, which are filter feeders, react to changes in water conditions, helping to detect pollution and other disruptions in water quality.
The discussion covers the technological developments and rigorous research necessary to map out the normal behaviour of these animals and provide accurate water quality ratings. They also discuss how this method is more efficient and environmentally friendly compared to traditional mechanical probes and lab tests.
The CEO shares examples of installations in varied environments, ranging from oil and gas platforms to diverse geographical locations from Quebec to Qatar.
00:00 Introduction to Sentinel Oysters and Water Quality Monitoring
00:55 Understanding the Concept of Biomonitoring
01:48 The Science Behind Mollusk Behavior and Detection
02:43 The Journey of Developing the Monitoring Device
04:24 Understanding the Sensitivity and Precision of Mollusks
05:12 The Role of Mollusks in Detecting Water Pollution
08:06 The Technical Aspects of Monitoring Mollusk Behavior
10:43 The Real-world Application of Mollusk Monitoring
15:34 The Challenges and Benefits of Using Mollusks as Sensors
22:51 The Potential for Expanding the Technique to Other Biomes
06:24 Conclusion: The Future of Biomonitoring
-
Here's an all-wheel drive e-bike ... with ChatGPT
Do you need ChatGPT integrated into your new bike? How about an all-wheel drive bike? (OK: a 2-wheel drive ... but yeah, that's all-wheel drive!)
In this episode of TechFirst, host John Koetsier chats with the CEO of Urtopia about their new AI-integrated 'smart bike with a mind'.
The e-bike market is predicted to grow to about $26 billion by 2028, but Dr. Owen Chang explains how Urtopia is taking a different approach by developing most parts in-house to create a fully integrated, software-enabled product. He says AI features like ChatGPT integration make their e-bikes safer and more personalised, providing assistance such as directions to make the ride more enjoyable. Urtopia is also developing its own version of GPT based on GPT-5, refining its potential functionalities.
We also chat about the world's first e-bike that has drive motors on both wheels, providing more power and better traction.
00:00 Introduction and Welcome
01:06 Exploring the Fusion GT Bike
01:47 The Design and Development Process
03:53 The Power of Dual Motor and Dual Battery System
06:51 The Future of Bikes: ChatGPT Integration?
07:12 The Role of AI in Urtopia's Bikes
07:38 The Vision of Urtopia: A Bicycle with a Mind
16:48 The Future of Smart Devices and E-bikes
25:30 Conclusion: The Bike as a Wearable Device
-
App store for your brain: reading brain waves to fix sleep, pain, learning
Can you deliver medical treatment by changing brainwaves instead of injecting drugs?
Elon Musk's Neuralink recently implanted its first device into a human patient. But can we get neurotech medical treatment without drilling holes in our skulls?
Maybe ...
According to Elemind, a startup with roots in MIT, we can. The company says it can read your brainwaves, manipulate them, and fix issues like sleep disorders, tremors, and pain, as well as speed up learning. Today we're chatting with Meredith Perry, CEO and former NASA astrobiology researcher, plus Dr. David Wang, co-founder and CTO, who has a PhD in AI from MIT.
This technology could potentially treat medical conditions ranging from sleep disorders and tremors to learning difficulties. We also discuss the future of medtech, envisioning an 'app store for the brain' where individualized treatments can be downloaded like apps, focusing on promoting the most optimized state of health for any given individual through real-time detection and diagnosis.
00:00 Intro to Neurotech and Neurostimulation
00:33 Welcome and Introduction of Guests
01:31 Understanding the Concept of Elemind's Neurotech Device
02:59 Exploring the Form Factor of the Device
04:23 How it works
07:28 Effectiveness and Impact of the Device
13:05 Future Plans and Vision for the Device
18:52 Potential and Impact of the Device on Healthcare
21:35 Conclusion and Final Thoughts
-
Hacking reality: Apple Vision Pro and security
Can someone hack your reality if you're wearing an Apple Vision Pro?
In this episode of TechFirst, John Koetsier discusses the arrival of Apple's Vision Pro, a groundbreaking VR headset, and its associated privacy and security concerns with Synopsys principal security consultant Jamie Boote.
They chat about how the device's advanced sensor systems can map out user environments, posing potential risks and security threats if hacked. Koetsier and Boote also consider Apple's past experience with hardware security and predict potential vulnerabilities and threats that may accompany this new technology.
00:00 Introduction to Apple Vision Pro
00:23 Privacy and Security Concerns
02:02 Potential Threats and Vulnerabilities
03:27 The Impact of New Technology on Security
04:20 Trust in Apple's Security Measures
06:25 Predictions for Future Security Issues
07:46 The Evolution of Software and Security
13:35 Final Thoughts and Conclusion
Customer Reviews
Tech Fire 🔥🔥🔥
Get the latest & greatest innovative updates & ideas here.
- Bryan Chamberland (From Addict to Author) Book: Yes I can - 22 Success Secrets from inspiring people around the world.
All my babies are beautiful
Hey, this is my podcast, so take this review with a BIG grain of salt. I write at Forbes and have started doing my interviews live on social video. Then I extract the audio for this podcast.
Honestly, at first I kinda sucked. The audio quality wasn’t great, and I didn’t really know how to do a great podcast.
But what I do know is how to interview people, and for this podcast I interview the most interesting people in startups and tech about how they're changing the world and building businesses.
Come along for the ride!