The EPAM Continuum Podcast Network
EPAM Continuum

Business • 5.0 (1 Rating) • 162 episodes

EPAM Continuum's award-winning podcasts feature interviews with people practicing innovation in various forms, digging into their ability to deliver results. Repeatedly.

    The Resonance Test 91: Open Source with Christopher Spalding, Rachel Fadlon, and Chris Howard

    “Open source” is, of course, a technology term. But, as it turns out, when you connect tech-minded people with those who don’t necessarily think of themselves as IT nerds, something magical can happen. In this case, what works in the digital world—transparency, community, collaboration—has a funny way of spilling over into the analog world. Because, well, people are people. We’re wired to connect.

    In today’s episode of *The Resonance Test,* EPAM’s open source sage Chris Howard chats up two open source experts from EBSCO Information Services: Christopher Spalding, Vice President of Product, and Rachel Fadlon, Vice President of SaaS Marketing and National Conferences & Events. EBSCO is a founding member of Folio, an open source library services platform (LSP), to which EPAM contributes.

    The Open Source Initiative (OSI) maintains a precise definition of the term, but in broad strokes, open source refers to software whose source code anyone can inspect, modify, and use. We all use it every day without realizing it. Indeed, open source powers the internet as we know it.

    Howard asked Spalding and Fadlon to reflect on what open source has been like at EBSCO, so other companies and industries can learn from an open source project that has achieved scale.

    Folio has allowed developers and librarians to work together in an unprecedented way. Being part of the Folio community, says Fadlon, has dramatically transformed the way EBSCO interacts with customers across the company.

    The relationships that develop organically in an open source community, which are less formal and more “person to person,” says Fadlon, have influenced EBSCO to be more community-oriented in all aspects of the business.

    “The way that you approach someone in the library as a community member *to* a community member is very different than the way we were approaching our customers before,” she says. “We’ve made a lot more things more transparent and open” since joining Folio.

    Spalding says even the language has changed around communications more broadly. “The focus is on, ‘Well, why would that be closed? Let’s make that open. Why wouldn’t we talk about that?’ Let’s put it all on the table because we get feedback instantly, and then we know the direction that we go as a partnership with the larger community.”

    Of course, the trio also talked about security and artificial intelligence, the latter playing out differently in different regions.
    Open source made headlines recently when the Linux ecosystem narrowly avoided a cybersecurity disaster: an eagle-eyed engineer caught a backdoor planted in xz Utils, a compression library shipped with major Linux distributions, before it could spread. Open source comes with risks, like anything online. Spalding says security concerns might have pushed libraries away from open source a few years ago, but now, increasingly, libraries are adopting the open source adage: “More eyes, fewer bugs. And definitely, more eyes, better security.”

    Howard agrees. “We shouldn’t be afraid of having all of those eyes on us… One of my developers calls it kind of ‘battle testing’ the software, throwing it out to the world and saying, ‘Does this do what you want it to do?’ And if it doesn’t, at least you can tell me … and I can go and fix it or you can even fix it for me if you want to. And I think we’re now finding more and more organizations that actually find that more attractive than scary.”

    Open yourself up to a more flexible, transparent future by listening to this engaging conversation.

    Host/Producer: Lisa Kocian
    Engineer: Kyp Pilalas
    Executive Producer: Ken Gordon

    • 30 min
    The Resonance Test 90: Responsible AI with David Goodis and Martin Lopatka

    Responsible AI isn’t about laying down the law. Creating responsible AI systems and policies is necessarily an iterative, longitudinal endeavor. Doing it right requires constant conversation among people with diverse kinds of expertise, experience, and attitudes. Which is exactly what today’s episode of *The Resonance Test* embodies. We bring to the virtual table David Goodis, Partner at INQ Law, and Martin Lopatka, Managing Principal of AI Consulting at EPAM, and ask them to lay down their cards. Turns out, they are holding insights as sharp as diamonds.

    This well-balanced pair begins by talking about definitions. Goodis mentions the recent Canadian draft legislation to regulate AI, which asks “What is harm?” because, he says, “What we're trying to do is minimize harm or avoid harm.” The legislation casts harm as physical or psychological harm, damage to a person's property (“Suppose that could include intellectual property,” Goodis says), and any economic loss to a person.

    This leads Lopatka to wonder whether there should be “a differentiation in the way that we legislate fully autonomous systems that are just part of automated pipelines.” What happens, he wonders, when there is an inherently symbiotic system between AI and humans, where “the design is intended to augment human reasoning or activities in any way”?

    Goodis is comforted when a human is looped in and isn’t merely saying: “Hey, AI system, go ahead and make that decision about David, can he get the bank loan, yes or no?”

    This nudges Lopatka to respond: “The inverse is, I would say, true for myself. I feel like putting a human in the loop can often be a way to shunt off responsibility for inherent choices that are made in the way that AI systems are designed.” He wonders if more scrutiny is needed in designing the systems that present results to human decision-makers.

    We also need to examine how those systems operate, says Goodis, pointing out that while an AI system might not be “really making the decision,” it might be “*steering* that decision or influencing that decision in a way that maybe we're not comfortable with.”

    This episode will prepare you to think about informed consent (“It's impossible to expect that people have actually even read, let alone *comprehended,* the terms of services that they are supposedly accepting,” says Lopatka), the role of corporate oversight, the need to educate users about risk, and the shared obligation involved in building responsible AI.

    One fascinating exchange centered on the topic of autonomy, toward which Lopatka suggests that a user might have mixed feelings. “Maybe I will object to one use [of personal data] but not another and subscribe to the value proposition that by allowing an organization to process my data in a particular way, there is an upside for me in terms of things like personalized services or efficiency gains for myself. But I may have a conscientious objection to [other] things.”

    To which Goodis reasonably asks: “I like your idea, but how do you implement that?”

    There is no final answer, obviously, but at one point, Goodis suggests a reasonable starting point: “Maybe it is a combination of consent versus ensuring organizations act in an ethical manner.”
    This is a conversation for everyone to hear. So listen, and join Goodis and Lopatka in this important dialogue.

    Host: Alison Kotin
    Engineer: Kyp Pilalas
    Producer: Ken Gordon

    • 32 min
    The Resonance Test 89: Guest Speaker Rowan Curran and Elaina Shekhter on Generative AI

    Can today’s companies afford to be Luddites?

    This is one of the big questions that Elaina Shekhter, EPAM’s Chief Marketing & Strategy Officer and SVP, puts to today’s *Resonance Test* guest, Rowan Curran, Senior Analyst at Forrester.

    In the case of generative AI, both answer: No. Why? Shekhter notes that whatever your competitive edge was in 2022, everyone today is operating in a different mode. Success or failure, she says, hinges on “accelerating along the vector of AI, because the opportunity to get away from the competition, faster, is much greater now than it ever has been.”

    In a lively and informed session of back-and-forth, they parse what is real and what is a hallucination in GenAI *at this moment.*

    Curran says that lately there has been an “ebullient explosion” of work on tools and approaches to manage system outputs. “Are we there yet in terms of having these be optimized architectures and things like that? Absolutely not. But is there tons of work being done there or are we approaching reasonable solutions to those problems? Yes, absolutely.”

    What should companies be doing to ensure they're ready to benefit and succeed with AI?

    “Right now, everybody's building the gen one of enterprise generative AI applications,” says Curran, and this will make them ubiquitous. But if your organization fails to adopt them, he adds: “You are going to be falling behind everybody else who is actually building with this stuff today.”

    Listen closely and learn what the currency of the future will be, the commercial and economic models of successful GenAI, and the nature of productivity gains: “Somebody saving 30 minutes per day who makes $60K a year is going to have a very different economic impact on the company versus somebody who makes $200K a year and saves 30 minutes per day,” Curran says. They also discuss how this new tech will transform the shape of work and what companies will be focusing on this year: “2023 is the year of excitement and experimentation, and 2024 is the year of optimization and efficiency,” says Curran.
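
    Curran's point about productivity gains is easy to check with quick arithmetic. Here is a minimal sketch in Python; the 250 workdays and 2,000 paid hours per year are our illustrative assumptions, not figures from the episode:

        # Back-of-the-envelope value of 30 minutes saved per workday.
        # Assumes ~250 workdays and ~2,000 paid hours per year; these are
        # illustrative assumptions, not figures from the episode.
        WORKDAYS_PER_YEAR = 250
        PAID_HOURS_PER_YEAR = 2000

        def annual_value_of_time_saved(salary: float, minutes_per_day: float) -> float:
            hourly_rate = salary / PAID_HOURS_PER_YEAR
            hours_saved = (minutes_per_day / 60) * WORKDAYS_PER_YEAR
            return hourly_rate * hours_saved

        for salary in (60_000, 200_000):
            value = annual_value_of_time_saved(salary, 30)
            print(f"${salary:,} salary -> ~${value:,.0f} of time saved per year")

        # $60,000 salary -> ~$3,750 of time saved per year
        # $200,000 salary -> ~$12,500 of time saved per year

    Same half hour saved, a very different dollar figure, which is exactly Curran's point.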

    Oh… and it might also transform the future of fun! “I do think we could use the new technology to make work more fun for people,” says Shekhter, who sees in the soaring advance of multimodal LLMs an opportunity for people “to develop in an enlightened way.”

    Enlighten yourself first. Smash that play button.

    Host: Alison Kotin
    Engineer: Kyp Pilalas
    Producer: Ken Gordon

    • 39 min
    Silo Busting 68: Cloud IR Readiness with Ron Konigsberg, Sam Rehman & Aviv Srour

    “There’s been an incident,” is a sentence no one wants to hear… except perhaps people like Ron Konigsberg, Co-Founder and CTO of Gem and our guest on *Silo Busting,* whose business is cloud incident response (IR).

    We know what you’re thinking: What makes cloud IR different from all other forms of IR?

    Let’s let Konigsberg explain: “The challenge is that the cloud is technically simply different.” If you’re using legacy tools, “you're going to protect probably 20% of the cloud.”

    Konigsberg is joined in conversation by Sam Rehman, EPAM’s Chief Information Security Officer and SVP, and the pair are pelted with questions by Aviv Srour, our Head of Cyber Innovation.

    Konigsberg says that incident responders need to “adapt from network and agents to services and APIs, and constantly learn about new services and stay up to date and up to speed” with what the bad guys are picking up.

    Oh, those bad guys! Regarding attackers, Konigsberg says: “They adopt innovation faster than defenders.” They can do so because they have fewer dependencies “and they care less [than defenders do] about breaking things.”

    To illustrate, he asks us to think about migrating to the cloud: Imagine you’re an attacker and you simply never worry about any legacy systems from your previous environments. “They have much more liberty and they move faster.”

    “They adopt techniques about new services that each cloud provider is releasing *tomorrow,*” says Konigsberg.

    So it is, in some ways, about playing catch-up. CISOs have had to adopt a new mindset and posture. “You can only block so many punches until you have to figure out [that] you need to move around, you need to counter, and so on,” says Rehman.

    Rehman adds that CISOs have finally understood the “shared responsibility between you and the cloud provider.” But that’s not the only issue with the cloud. “It's much flatter than what you’re used to on prem,” he says. “Which means a lateral attack is a lot quicker, moving things around a lot easier, and the *simplicity* of people actually moving things around and infecting a large area is substantially higher.”

    So how can an organization properly respond to, and learn to prioritize within, the cloud conundrum? One answer, says Rehman, is culture.

    “We have to adopt a learning culture in security,” he says. “They’re always gonna be one step ahead of us, but at least we're one step behind, not ten.” Pick up the pace of your learning and listen to the experts speak. Hit play!

    Host: Lisa Kocian
    Editor: Kyp Pilalas
    Producer: Ken Gordon

    • 36 min
    The Resonance Test 88: Scott Loughlin, Sam Rehman, and Brian Imholte on Privacy, Education, and AI

    Sam Rehman—a frequent voice on this podcast network and EPAM’s Chief Information Security Officer and SVP—was in the classroom recently, teaching students, and in the process was “surprised by the density of PII that's in the system.”

    This led Rehman to realize that “at least here in California,” higher education’s investment in cybersecurity is “substantially behind.”

    Catching up is a theme of today’s conversation about privacy, education, and artificial intelligence.

    Speaking for the (cyber)defense, with Rehman, is today’s guest on *The Resonance Test,* Scott Loughlin, Partner and Global Co-Lead of the Privacy & Cybersecurity Practice at the law firm Hogan Lovells.

    “It took a long time to get people to understand that the easiest thing to do is not always the right thing to do to protect the company’s interest and protect the company’s data,” says Loughlin. “And that is an experience that we'll all have with respect to generative AI tools.”

    Loughlin and Rehman are put through their conversational paces by questions from Brian Imholte, our Head of Education & Learning Services.

    They have much to say about data governance (“Data is not by itself anymore, it's broken up in pieces, combined, massaged, and then pulled out from a model,” says Rehman), data pedigree, the laws—and lack thereof—regarding privacy and generative AI. They also kick around the role that FERPA assumes here. “You’re trying to deploy this old framework against this new technology, which is difficult,” says Loughlin, adding: “There are some key areas of tension that will come up with using generative AI with student data.”

    So where might an educational publisher or school begin?

    “Focus on your value first,” says Rehman. Do your experiments, but do them in small pieces, he says: “And then within those small pieces, know what you’re putting into the model.”

    This informative and spirited conversation is even occasionally funny. Loughlin brings up a court case about whether a selfie-taking monkey would own the copyright to its photo. “The court said no,” notes Loughlin, adding that US copyright law is “designed to protect the authorship of humans, not of monkeys, and in this case not of generative AI tools.”

    Download now: It’s sure to generate some new thoughts.

    Host: Kenji Ross
    Engineer: Kyp Pilalas
    Producer: Ken Gordon

    • 41 min
    Silo Busting 67: Andrew Whaley and Sam Rehman on App Security

    Going mobile: It’s going to create vulnerabilities. That’s the way things work with apps. They aren’t just friendly pieces of software that help you beat traffic or bring your favorite tunes into your eardrums… they are opportunities, rich ones, for the bad guys.

    Andrew Whaley, the Senior Technical Director (UK) at Promon and our guest on *Silo Busting,* says that with an app, “You have to be able to trust the security model that you've got around it.”

    Whaley talks with Sam Rehman, our Chief Information Security Officer and SVP, about how apps operate on a client-server model, but “all the client code is distributed outside of your enterprise.” Some of these users could well be criminals who, once they gain access to that code, could “reverse engineer it and come up with ways to attack that.”

    And the code in those apps can be a bit suspect. Whaley says that most apps are made up of 80% open-source software. “You know how many of those app developers go and build that source themselves from source and read over it before they compile?” he asks and then answers: “Probably close to zero.”

    Speaking of putting the work in… Rehman talks about calibrating “the level of effort that the attacker would have to go through versus the yield.” The trick is, he says, layering on cybersecurity techniques “so that the yield is not worth it for them.”

    Whaley replies that “once you layer obfuscation on, you then have this impenetrable forest” and that the “immediately accessible ways of attacking [an application] are taken off the table.”

    Together they chat about supply chain attacks, nonlinear programming, and more. Tune in and be safe(r)!

    Host: Glenn Gruber
    Editor: Kyp Pilalas
    Producer: Ken Gordon

    • 25 min
