On Tech & Vision With Dr. Cal Roberts

Lighthouse Guild

Dr. Cal Roberts, President and CEO of Lighthouse Guild, the leading provider of exceptional services that inspire people who are visually impaired to attain their goals, interviews inventors, developers and entrepreneurs who have innovative tech ideas and solutions to help improve the lives of people with vision loss.

  1. SEP 17

    BenVision: Navigating with Music

    This podcast is about big ideas on how technology is making life better for people with vision loss. When it comes to navigation technology for people who are blind or visually impaired, many apps utilize voice commands, loud tones or beeps, or haptic feedback. In an effort to create a more natural, seamless experience, the team at BenVision has created a different type of system that allows users to navigate using musical cues instead! For this episode, Dr. Cal spoke with BenVision’s CEO and co-founder, Patrick Burton, along with its Technology Lead, Aaditya Vaze. They shared the inspiration behind BenVision, how they’re able to create immersive soundscapes that double as navigation aids, and the exciting future applications this technology could offer. The episode also features BenVision’s other co-founder and Audio Director, Soobin Ha. Soobin described her creative process for designing BenVision’s soundscapes, how she harnesses the power of AI, and her bold vision of what’s to come. Lighthouse Guild volunteer Shanell Matos tested BenVision herself and shares her thoughts on the experience. As you’ll hear, this technology is transformative!

    The Big Takeaways:

    Why Music? Navigation technology that uses voice, tone, or haptics can create an added distraction for some users. But the brain processes music differently. Instead of overloading the senses, for some users music works alongside them, allowing them to single out separate sound cues or take in the entire environment as a whole. Just as different instruments correspond to various characters in “Peter and the Wolf,” BenVision assigns unique musical cues to individual objects.

    User Experience: Shanell Matos appreciated how BenVision blends in more subconsciously, allowing her to navigate a space without having to be as actively engaged with the process.

    Additional Applications: BenVision began as an augmented reality program, and its creators see the potential for it to grow beyond a navigational tool into something both people who are visually impaired and fully sighted people can use. For example, it could create unique soundscapes for museums, theme parks, and more, augmenting those experiences in exciting new ways.

    The Role of AI: Artificial intelligence already plays a big role in how BenVision works, and its creators see it becoming even more important in the future. BenVision already harnesses AI for object detection, and its companion app uses AI to provide instant voice support about the immediate surroundings if needed. Moving forward, AI could be used to instantaneously generate new sound cues or to help users customize their experience at the press of a button.

    Tweetables:

    “We thought that if the human brain can learn echolocation and we have this amazing technology that’s available to us in the modern day, then why can’t we make echolocation a little bit more intuitive and perhaps a little bit more pleasant.” — Patrick Burton, BenVision CEO & Co-Founder

    “You can think of it like a bunch of virtual speakers placed at different locations around the user. So like a speaker on a door or a couch or a chair. And then there are sounds coming from all these virtual speakers at the same time.” — Aaditya Vaze, BenVision Technology Lead

    “I want to gamify this idea so that the user can actually find some interest and joy by using it, rather than just find it only helpful, but also [to create] some pleasant feeling.” — Soobin Ha, BenVision Audio Director & Co-Founder

    “So if there’s a lot of people, there’s a lot of conversations happening, a lot of sounds happening, a lot of movement happening. It’s really difficult to keep up with what everything is doing. Whereas with music, it’s not as difficult to pick out layers.” — Shanell Matos, Lighthouse Guild Volunteer

    Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

    32 min
  2. JUL 12

    The Possibilities of Vision Restoration

    This podcast is about big ideas on how technology is making life better for people with vision loss. For hundreds of years, health professionals have dreamed of restoring vision for people who are blind or visually impaired. However, doing so, either through transplanting a functioning eye or using technological aids, is an incredibly complex challenge. In fact, many considered it impossible. But thanks to cutting-edge research and programs, the ability to restore vision is closer than ever. As a first for this podcast, this episode features an interview with Dr. Cal Roberts himself! Adapting audio from an interview on The Doctors Podcast, Dr. Cal describes his work as a program manager for a project on eye transplantation called Transplantation of Human Eye Allografts (THEA). Funded by a government initiative called ARPA-H, THEA is bringing some of the country’s finest minds together to tackle the complexities of connecting a person’s brain to an eye from a human donor. This episode also features an interview with Dr. Daniel Palanker of Stanford University. Dr. Palanker is working on technology that can artificially restore sight through prosthetic replacement of photoreceptors. Having proved successful in animals, Dr. Palanker and his team are working hard to translate it to humans. And if that can happen, then something once considered impossible could finally be accomplished!

    The Big Takeaways:

    The Challenges of Eye Transplants: Although eyeball transplants have been done, they’ve only been cosmetic. So far, nobody has been able to successfully connect a donor eyeball to a recipient’s brain. Dr. Roberts’s work with THEA is bringing together multiple teams to tackle the challenges associated with a whole-eyeball transplant, from connecting nerves and muscles to ensuring the organ isn’t rejected, and much more.

    “Artificial” Vision Restoration: Dr. Palanker is working to replace the functions of photoreceptors through technological means. His photovoltaic array is placed underneath the retina and converts light into an electrical current that activates the cells that send visual information to the brain. While it doesn’t completely restore sight for people with age-related macular degeneration, this technology shows incredible promise.

    Decoding “Brain Language”: For both Dr. Roberts and Dr. Palanker, one of the biggest challenges with vision restoration is understanding how the eye and brain communicate. Dr. Roberts likens it to Morse code — the eye speaks to the brain in “dots and dashes,” which the brain then converts into vision. Right now, the language is still foreign to us, but we’re closer than ever to decoding it.

    The Evolution of the Brain-Machine Interface: Dr. Palanker imagines incredible possibilities in the interaction between the brain and technology. If we can find a way to truly translate the brain’s signals into information, Dr. Palanker envisions the possibility of direct brain-to-brain communication without verbalization. In a way, this could make people telepathic, able to understand and digest vast amounts of information in an instant.

    Tweetables:

    “So ideally in medicine, at least, the ideal therapy is the restoration of full functionality. If we can grow back photoreceptors and make them reconnect to bipolar cells, undo all the rewiring that occurred during degeneration, and restore the full extent of vision, that would be the ideal outcome.” — Dr. Daniel Palanker, Professor of Ophthalmology, Stanford University

    “We can think about other aspects of brain-machine interface, which takes you maybe into the realm of capabilities that humans never had. If you enable artificial senses or enable brain-to-brain connectivity so you can communicate without verbalization, that would open completely new capabilities that humanity never had.” — Dr. Palanker

    Forty-two years after the implantation of the first mechanical heart, there’s not

    44 min
  3. APR 26

    Biosensors: The Future of Diagnostic Medicine

    This podcast is about big ideas on how technology is making life better for people with vision loss. This episode is about how biosensor technology is revolutionizing the field of diagnostic and preventive medicine. Biosensors can take many forms — wearable, implantable, and even ingestible. And they can serve many different functions as well, most notably when it comes to detecting the various pressure levels in our bodies. This episode features interviews with several luminaries working with biosensors. One of them is Doug Adams, a revolutionary entrepreneur who was inspired to create a biosensor that can assist in the treatment of glaucoma patients, initially focusing on a sensor for intraocular pressure. More recently, Doug founded a company called QURA, whose current efforts are focused on a biosensor that detects blood pressure. To elaborate on QURA’s initiatives, this episode also includes insights from its Chief Business Officer, David Hendren. He and Dr. Cal discuss the current state of biosensor technology, the benefits of implantable biosensors, and how they work. Finally, this episode includes a conversation with Max Ostermeier, co-founder and General Manager of Implandata Ophthalmic Products. Max was previously interviewed by Dr. Cal for the episode “Innovations in Intraocular Pressure and Closed Loop Drug Delivery Systems.” This time, Max joins Dr. Cal to discuss the possibilities of biosensor technology and his company’s Eyemate system, which includes biosensor technology for glaucoma patients. All three guests also offer their thoughts on the future of biosensors and their endless possibilities. While it may seem like science fiction, it truly is science reality!

    The Big Takeaways:

    What Biosensors Do: Currently, biosensors primarily sense the various pressures in the human body. QURA’s current sensor detects blood pressure and assists with hypertension. Meanwhile, Implandata’s Eyemate technology serves glaucoma patients by gathering data on intraocular pressure.

    The Rapid Shrinking of Biosensors: When Doug Adams first started working on biosensors, the model he saw was the size of a microwave. Now, it’s shrunk to the size of a grain of rice! Smaller biosensors are easier to implant and can be placed in more spots within the body, allowing them to gather more and more data.

    The Benefits of AI: One drawback of gathering so much data is that it can be hard to analyze. However, improvements in AI technology are making it easier to sort through all that data, giving doctors and patients valuable information for medical diagnostics and treatments.

    The Future of Biosensors: As implantable biosensors become smaller and more sophisticated, all our guests see them becoming a crucial part of healthcare. In addition to gathering data on all sorts of functions within the body, biosensors could provide therapies and treatments with minimal human intervention.

    Tweetables:

    “So, we are measuring the absolute pressure inside the eye with this kind of technology. It originates from the automotive industry: tire pressure sensors, where you also have to measure the pressure inside the tire. And so basically we took that technology and advanced it and made it so small that you can also implant this kind of sensor in an eye.” — Max Ostermeier, co-founder and General Manager of Implandata Ophthalmic Products

    “So I had a physical a month ago, and along with the physical, they draw blood, and they send that blood off to a lab. I have a feeling in the next decade, that goes away. Why do you have to send a vial of blood to the lab? Because if I had a sensor, not even in an artery, but on top of an artery, I could do a complete analysis of everything in that blood that you’re doing from the lab.” — Doug Adams, entrepreneur and founder of QURA

    The important thing is that you are automatically getting data to the care group that is taking care of these patients, w

    32 min
  4. FEB 16

    The World in Your Hand: The Power of Generative AI

    When it comes to emerging technology, there’s no hotter topic than artificial intelligence. Programs like ChatGPT and Midjourney are becoming more popular and are inspiring people to explore the possibilities of what AI can achieve — including when it comes to accessible technology for people who are blind or visually impaired. One of those people is Saqib Shaikh, an engineering manager at Microsoft. Saqib leads the team that developed an app called Seeing AI, which utilizes the latest generation of artificial intelligence, known as generative AI. Dr. Cal spoke with Saqib about how generative AI works, his firsthand experience using an app like Seeing AI, and how it has helped improve his daily life. This episode also features Alice Massa, an occupational therapist at Lighthouse Guild. Alice described the many benefits of generative AI and how it helps her clients better engage with their world. Saqib and Alice both agreed that the current state of AI is only the beginning of its potential. They shared their visions of what it could achieve in the future — and it doesn’t seem that far off.

    The Big Takeaways:

    The Power of Generative AI: Saqib discussed the present condition of artificial intelligence and why generative AI is a massive leap from what came before it. With a deep data pool to draw from, generative AI can do much more than identify items or come up with an essay prompt. It can understand and interpret the world with startling depth and expediency.

    Seeing AI: This app can truly put the world in the palm of your hand. It can perform essential tasks like reading a prescription or the sign at a bus stop — and even more than that! It can describe all the colorful details of sea life in a fish tank at the aquarium or help you order dinner off a menu. The app doesn’t just give people who are blind or visually impaired greater access to the world — it expands it.

    Embrace Change: There’s understandably a lot of uncertainty about what role AI should play in society. However, Saqib Shaikh and Alice Massa insist that there’s nothing to fear from AI, that the benefits far outweigh any potential drawbacks, and that as long as it’s handled responsibly, there’s a lot AI can do to help improve our lives.

    Tweetables:

    “I had a client at the Lighthouse who really was very disinterested in doing anything. The only thing he did on his phone was answer a call from his pastor and call his pastor. And I was able to put Seeing AI on his phone. And his wife said the first time in two years, she saw a smile on his face because now he could read his Bible by himself.” — Alice Massa, Occupational Therapist at Lighthouse Guild

    “What if AI could understand you as a human? What are your capabilities? What are your limitations at any moment in time? Whether that's due to a disability or your preferences or something else, and understand the environment, the world you're in, or the task you’re doing on the computer or whatever. And then we can use the AI to close that gap and enable everyone to do more and realize their full potential.” — Saqib Shaikh, Engineering Manager at Microsoft

    “I call my phone my sister because my phone is the person I go to when I’m on the street if I’m walking in Manhattan. The other day I was meeting someone on 47th Street. I wasn’t sure which block I was on. All I did was open Seeing AI short text, hold it up to the street sign, and it told me I was on West 46th Street.” — Alice Massa

    “Some of the interesting things powered by generative AI is going from taking a photo, say from your photo gallery if you’re reliving memories from your vacation, or even just what’s in front of you right now. It can go from saying it’s a man sitting on a chair in a room to actually giving you maybe a whole paragraph describing what’s in the room, what’s on the shelf, what’s in the background, what’s through the window, even. And it’s just re

    27 min
  5. 12/08/2023

    Reimagining the Visual Arts

    This podcast is about big ideas on how technology is making life better for people with vision loss. When it comes to art, a common phrase is “look, don’t touch.” Many think of art as a purely visual medium, and that can make it difficult for people who are blind or visually impaired to engage with it. But in recent years, people have begun to reimagine what it means to experience and express art. For this episode, Dr. Cal spoke to El-Deane Naude from Sony Electronics. El-Deane discussed the Retissa NeoViewer, a project developed with QD Laser that projects images taken on a camera directly onto the photographer’s retina. This technology allows photographers who are visually impaired to see their work much more clearly and with greater ease. Dr. Cal also spoke with Bonnie Collura, a sculptor and professor at Penn State University, about her project, “Together, Tacit.” Bonnie and her team developed a haptic glove that allows artists who are blind or visually impaired to sculpt with virtual clay. They work in conjunction with a sighted partner wearing a VR headset, allowing both to engage with each other and gain a new understanding of the artistic process. This episode also includes an interview with Greta Sturm, who works for the State Tactile Omero Museum in Italy. Greta described how the museum’s founders created an experience solely centered around interacting with art through touch. Not only is it accessible for people who are blind or visually impaired, but it allows everyone to engage with the museum’s collection in a fascinating new way. Finally, a painter and makeup artist named Emily Metauten described how useful accessible technology has been for her career. But she also discussed the challenges artists who are blind or visually impaired face when it comes to gaining access to this valuable technology.

    The Big Takeaways:

    The Value of Versatility: Many photographers who are visually impaired require large, unwieldy accessories to properly capture their work. Sony and QD Laser are determined to solve this problem with the Retissa NeoViewer, which can replace cumbersome accessories like screen magnifiers and optical scopes.

    Sculpting Virtual Clay: The aim of Together, Tacit, is to “foster creative collaboration between blind, low-vision, and sighted individuals.” A major way this is accomplished is by using the haptic glove to sculpt virtual, rather than physical, clay. Working in VR makes it harder for the sighted partner to unintentionally influence the work of the artist who is blind or visually impaired. As a result, the experience for both users is more authentic and enriching.

    Reimagining the Museum Experience: The Tactile Omero Museum is much more than an opportunity for people who are blind or visually impaired to interact with art – it’s reimagining how that art is fundamentally experienced. By giving visitors a chance to engage with pieces on a tactile level, the museum allows everyone a chance to reconnect with a vital sense that many take for granted.

    Expanding Access to Technology: For artists like Emily Metauten who are visually impaired, accessible technology makes it much easier to do their jobs. However, many governmental organizations don’t have the infrastructure to provide this technology to them. Emily wants to raise awareness of how valuable this technology can be and why providing it to people is so important.

    Tweetables:

    “When we’re little kids, we want to touch everything … and then soon after that, we’re told, no, no, no, you shouldn’t touch. You should look and not touch. And so, it becomes the reality and it becomes what you’re supposed to do.” – Greta Sturm, Operator at State Tactile Omero Museum

    “I carry a monocular, a little optical scope. But it becomes extremely difficult when you’re out and about and you’re trying to take a photograph, trying to change your settings. This method, the laser projection

    37 min
  6. 10/10/2023

    Developing Big Ideas: Product Testing and Iteration

    This podcast is about big ideas on how technology is making life better for people with vision loss. When we buy a product off the shelf, we rarely think about how much work went into getting it there. Between initial conception and going to market, life-changing technology requires a rigorous testing and development process. That is especially true when it comes to accessible technology for people who are blind or visually impaired. For this episode, Dr. Cal spoke to Jay Cormier, the President and CEO of Eyedaptic, a company that specializes in vision-enhancement technology. Its flagship product, the EYE5, provides immense benefits to people with age-related macular degeneration, diabetic retinopathy, and other low-vision diseases. But this product didn’t arrive by magic. It took years of planning, testing, and internal development to bring this technology to market. This episode also features JR Rizzo, a professor and researcher of medicine and engineering at NYU — and a medical doctor. JR and his research team are developing a wearable “backpack” navigation system that uses sophisticated camera, computer, and sensor technology. JR discussed both the practical and technological challenges of creating such a sophisticated project, along with the importance of beta testing and feedback.

    The Big Takeaways:

    The Importance of Testing: There’s no straight line between the initial idea and the final product. It’s more of a wheel that rolls along with the power of testing and feedback. It’s extremely important to have a wide range of beta testers engage with the product. Their experience with it can highlight unexpected blind spots and create opportunities to make something even greater than originally anticipated.

    Anticipating Needs: When it comes to products like the EYE5, developers need to anticipate that users will have evolving needs as their visual acuity deteriorates. So part of the development process involves anticipating what those needs will be and finding a way to deliver new features as users need them.

    Changing on the Fly: Sometimes, we receive feedback we were never expecting. When JR Rizzo received some surprise reactions to his backpack device, he had to reconsider his approach and re-examine his fundamental design.

    Future-Casting: When Jay Cormier and his team at Eyedaptic first started designing the EYE5 device, they were already considering what the product would look like in the future and how it would evolve. To that end, they submitted certain patents many years ahead of when they thought they’d need them — and now, they’re finally being put to use.

    Tweetables:

    “I’m no Steve Jobs and I don’t know better than our users. So the best thing to do is give them a choice and see what happens.” — Jay Cormier, President & CEO of Eyedaptic

    “I started to think a little bit more about … assistive technologies. … And, I thought about trying to build in and integrate other sensory inputs that we may not have natively … to augment our existing capabilities.” — JR Rizzo, NYU Professor of Medicine and Engineering

    “I think the way we’ve always looked at it is the right way, which is you put the user, the end user, front and center, and they’re really your guide, if you will. And we’ve always done that even in the beginning when we start development of a project.” – Jay Cormier

    “When we put a 10-pound backpack on some colleagues, they offered some fairly critical feedback that it was way too heavy and they would never wear it. … They were like … it’s a non-starter.” — JR Rizzo

    Contact Us: Contact us at podcasts@lighthouseguild.org with your innovative new technology ideas for people with vision loss.

    Pertinent Links: Lighthouse Guild, Eyedaptic, Rizzo Lab

    38 min
  7. 07/28/2023

    Robotic Guidance Technology

    This podcast is about big ideas on how technology is making life better for people with vision loss. The white cane and guide dogs are long-established, foundational tools used by people with vision impairment to navigate. Although it would be difficult to replace the 35,000 years of bonding between humans and dogs, researchers are working on robotic technologies that can replicate many of the same functions of a guide dog. One such project, called LYSA, is being developed by Vix Labs in Brazil. LYSA sits on two wheels and is pushed by the user. It’s capable of identifying obstacles and guiding users to saved destinations. And while hurdles such as outdoor navigation remain, LYSA could someday be a promising alternative for people who either don’t have access to guide dogs or aren’t interested in having one. In a similar vein, Dr. Cang Ye and his team at Virginia Commonwealth University are developing a robotic white cane that augments the familiar white cane experience for people with vision loss. Like LYSA, the robotic white cane has a sophisticated computer learning system that allows it to identify obstacles and help the user navigate around them, using a roller tip at its base. Although it faces obstacles as well, the robotic guide cane is another incredible example of how robotics can help improve the lives of people who are blind or visually impaired. It may be a while until these technologies are widely available, and guide dogs and traditional canes will always be extremely useful for people who are blind or visually impaired. But with how fast innovations in robotics are happening, it may not be long until viable robotic alternatives are available.

    The Big Takeaways:

    Reliability of Biological Guide Dogs: Although guide dogs have only been around for a little over a century, humans and dogs have a relationship dating back over 35,000 years. Thomas Panek, the President and CEO of Guiding Eyes for the Blind, points out that there will never be a true replacement for this timeless bond. That being said, he thinks there is a role for robotics to coexist alongside biological guide dogs, and even to augment their abilities.

    LYSA the Robotic Guide Dog: LYSA may look more like a rolling suitcase than a dog, but its developers at Brazil’s Vix Labs are working on giving it many of the same functions as its biological counterpart. LYSA can identify obstacles and guide its user around them. And for indoor environments that are fully mapped out, it can bring the user to pre-selected destinations as well.

    The Robotic White Cane: Dr. Cang Ye and his team at Virginia Commonwealth University are developing a robotic white cane that can provide more specific guidance than the traditional version. With a sophisticated camera combined with LiDAR technology, it can help its user navigate the world with increased confidence.

    Challenges of Outdoor Navigation: Both LYSA and the robotic white cane are currently better suited for indoor navigation. A major reason is the unpredictability of an outdoor environment, along with more fast-moving objects, such as cars on the road. Researchers are working hard to overcome this hurdle, but it still poses a major challenge.

    The Speed of Innovation: When Dr. Ye began developing the robotic white cane a decade ago, the camera his team used cost $500,000 and had image issues. Now, their technology can run on a smartphone – making it much more affordable and, hopefully one day, more accessible if it becomes available to the public.

    Tweetables:

    “We’ve had a relationship with dogs for 35,000 years. And a relationship with robots for maybe, you know, 50 years. So the ability of a robot to take over that task is a way off. But technology is moving quickly.” — Thomas Panek, President and CEO of Guiding Eyes for the Blind

    “Outdoor navigation is a whole new world because if you go on the streets, it could be dangerous. You have to be ver

    33 min

