O'Reilly Radar Podcast - O'Reilly Media Podcast

O'Reilly Media

Insight, analysis, and research about emerging technologies from O'Reilly Media.

  1. 03/23/2017

    Aman Naimat on building a knowledge graph of the entire business world

    The O'Reilly Radar Podcast: The maturity of AI in enterprise, bridging the AI gaps, and what the U.S. can do with $4 trillion. This week, I sit down with Aman Naimat, senior vice president of technology at Demandbase and co-founder and CTO of Spiderbook. We talk about his project to build a knowledge graph of the entire business world using natural language processing and deep learning. We also talk about the role AI is playing in those companies today and what’s going to drive AI adoption in the future. Here are a few highlights:

    Surveying AI adoption

    We were studying businesses for the purpose of helping salespeople talk to accounts, and we realized we could use our technology to study entire markets. So, we decided to study entire markets of how companies are adopting AI or big data. Really, the way it works is, we built a knowledge graph of how businesses interact with each other, their behavioral signals: who's doing business with whom, who are their partners, customers, suppliers? Who are the influencers, the decision-makers? Who's buying what product? In essence, we have built a universal database, if I may, or a knowledge graph, of the entire business world. We use natural language processing and deep learning—the short answer for what data sets we look at is everything. We are now reading the entire business internet, completely unstructured data, from SEC filings to financial regulatory filings to tweets to every blog post, every job post, every conference visit, every PowerPoint, every video. So, it's really pretty comprehensive. We also have a lot of proprietary data around the business world, as to who's reading or viewing what ad, and we triangulate all of that in this graph and do machine learning on top to classify each of the 500,000 companies by how mature they are in AI: How many people do they have working on it? What are they doing with it? What are the use cases? How much money are they spending? That's how we built the study.

    Bridging the AI gap between academia and enterprise

    What will drive adoption in AI, I think, is also investment. The current landscape, according to our study, which was the first data-driven study of the market, is that only a few companies are really investing in it. There's some interest in other places, but companies like Google—the CEO recently came out and said that AI is really how the company will be framed going forward. So, we need more investment: more venture capital investment, more government investment, and that's not just in starting startups, but in putting together data sets that data scientists can consume. Public data sets are a huge gap between what is available in academia and what companies like us at Demandbase have—we have a ton of proprietary data. To be able to have such data available in open source...that could spark new types of use cases.

    Can we build an AI-based representative democracy?

    Another use case: the largest set of spend in the world is actually the United States government—$4 trillion; it's a huge market. So, how do you allocate those resources? Is it possible that we can build systems that, in essence, become some sort of AI-based representative democracy, where we can optimize the preferences of individual citizens? Today, most citizens are completely unaware of what's happening at their local or state government level. If I ask you who your state senator is, you probably don't know. Nobody actually does, yet the state level pretty much has the biggest impact on our lives. They control education, roads, the environment, and they have some of the largest budgets—health care. These are suddenly areas where we can try to understand individual preferences automatically, and there's a lot of data—for each bill that is passed, there are thousands and thousands of pages of feedback, text, that AI can process and understand. So, obviously some of this is really far out, but that doesn't mean we can't do something today.
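The graph Naimat describes—typed relationships between companies, extracted from unstructured text—can be pictured with a minimal sketch. The company names and relationship types below are invented for illustration; the actual Demandbase/Spiderbook system is proprietary and far richer.

```python
# A knowledge graph as a list of (subject, relation, object) triples,
# the kind of edges an NLP pipeline might extract from filings or blog posts.
edges = [
    ("AcmeCorp", "partner_of", "DataWorks"),
    ("AcmeCorp", "customer_of", "CloudNine"),
    ("DataWorks", "supplier_of", "AcmeCorp"),
]

def neighbors(graph, company, relation=None):
    """Return companies linked from `company`, optionally filtered by relation type."""
    return [obj for subj, rel, obj in graph
            if subj == company and (relation is None or rel == relation)]
```

A downstream model would then compute features over such edges (counts of partners, suppliers, AI-related job posts) to classify each company's maturity level.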

    19 min
  2. 03/09/2017

    AI adoption at the atomic level of jobs and work

    O'Reilly Radar Podcast: David Beyer on AI adoption challenges, the complexities of getting an AI ROI, and the dangers of hype. This week, I sit down with David Beyer, an investor with Amplify Partners. We talk about machine learning and artificial intelligence, the challenges he’s seeing in AI adoption, and what he thinks is missing from the AI conversation. Here are a few highlights:

    Complexities of AI adoption

    AI adoption is actually a multifaceted question. It's something that touches on policy at the government level. It touches on labor markets and questions around equity and fairness. It touches on broad commercial questions around industries and how they evolve over time. There are many, many ways to address this. I think a good way to think about AI adoption at the broader, more abstract level of sectors or categories is to actually zoom down a bit and look at what it is actually replacing. The way to do that is to think at the atomic level of jobs and work. What is work? People have been talking about questions of productivity and efficiency for quite some time, but a good way to think of it through the lens of the computer or machine learning is to divide work into four categories. It's a two-by-two matrix: cognitive versus manual work, and routine versus non-routine work. The 90s internet and computer revolution, for the most part, tackled the routine work—spreadsheets and word processing, things that could be specified by an explicit set of instructions. The more interesting stuff that's happening now, and that should be happening over the next decade, is how software starts to impact non-routine work, both cognitive and manual. Cognitive work is tricky. It can be divided into two categories: things that are analytical (so, math and science and the like) and things that are more interpersonal and social—sales being a good example. Then with non-routine work, the first instinct is to think about whether the job seems simple to us as people—so, cleaning a room, at first blush, seems like something pretty much anyone who's able could do; it's actually incredibly difficult. There's this bizarre, unexpected result that the hard problems are easier to automate—things like logic. The easier problems are incredibly hard to automate—things that require visuospatial orientation, navigating complex and potentially changing terrain. Things that our brains have basically been programmed over millennia to accomplish are actually very difficult to do from the perspective of coding a set of instructions into a computer.

    AI ROI

    The question I have in my mind is: in the 90s and 2000s, was simply applying computers to business and communication its own revolution? Does machine learning and AI constitute a new category, or is machine learning the final complement that extracts the productivity out of that initial silicon revolution, so to speak? There's this economic historian Paul David, out of Oxford, who wrote an interesting piece looking at American factories and how they adapted to electrification because, previously, a lot of them were steam powered. The initial adoption showed a real lack of imagination: they used motors where steam used to be and hadn't really redesigned anything. They didn't really get much of any productivity. It was only when that crop of old managers was replaced with new managers that people fully redesigned the factory into what we now recognize as the modern factory. The question is the technology itself: from our perspective as investors, it's insufficient. You need business process and workplace rethinking. An area of research, as it relates to this model of AI adoption, is how reconstructible it is—is there an index to describe how particular industries or particular workflows or businesses can be remodeled to use machine learning with more leverage? I think that speaks to how managers in those instances are going to look at ROI. If the payback period for a particular investment is uncertain or really long, they're less likely to adopt it, which is why you're seeing a lot of pickup of robots in factories. You can specify and drive the ROI; the payback period is coming down because it's incredibly clear, well-defined. Another example is using machine learning in a legal setting, for a law firm. There are parts of it—for example, technology-assisted review—where the ROI's pretty clear. You can measure it in time saved. For other technologies that assist in prediction or judgment—say, higher-level thinking—the return is pretty unclear. A lot of the interesting technologies coming out these days—from deep learning, in particular—enable things that operate at a higher level than we're used to. At the same time, though, people are building products around them that do relatively high-level things that are hard to quantify. The productivity gains from that are not necessarily clear.

    The dangers of AI hype

    One thing I'd say, rather than something missing from the AI conversation, is something there's too much of: hype. Too many businesses now are pitching AI almost as though it's batteries included. That's dangerous because it's going to potentially lead to over-investment in things that overpromise. Then, when they under-deliver, it has a deflationary effect on people's attitudes toward the space. It almost belittles the problem itself. Not everything requires the latest whiz-bang technology. In fact, the dirty secret of machine learning—and, in a way, venture capital—is that so many problems could be solved by just applying simple regression analysis. Yet very few people, very few industries, do the bare minimum.
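Beyer's "bare minimum" is worth making concrete: ordinary least-squares regression needs no special tooling at all. Here is a minimal sketch in plain Python, with made-up numbers standing in for any pair of business metrics:

```python
# Toy data: e.g., marketing spend (xs) vs. revenue (ys) for five accounts.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: slope = cov(x, y) / var(x), intercept from the means.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Predict y for a new x using the fitted line."""
    return slope * x + intercept
```

For this toy data the fit is roughly y ≈ 1.96x + 0.14; that twenty-line baseline is the comparison point any "whiz-bang" model has to beat.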

    23 min
  3. 02/23/2017

    Sara Watson on optimizing personalized experiences

    The O'Reilly Radar Podcast: Turning personalization into a two-way conversation. In this week's Radar Podcast, O’Reilly’s Mac Slocum chats with Sara Watson, a technology critic and writer in residence at Digital Asia Hub. Watson is also a research fellow at the Tow Center for Digital Journalism at Columbia and an affiliate with the Berkman Klein Center for Internet & Society at Harvard. They talk about how to optimize personalized experiences for consumers, the role of machine learning in this space, and what will drive the evolution of personalized experiences. Here are a few highlights:

    Accountability across the life cycle of data

    One of the things I'm paying a lot of attention to is how the machine learning application of this changes what can and can't be explained about personalization. One of the things I'm really looking for as a consumer is to say, "Okay, why am I seeing this?" That's really interesting to me. I think more and more we're not going to be able to answer that question. Even now, I think a lot of times we can only provide one piece of the answer as to why I'm seeing this ad, for example. It's really going to get far more complicated, but at the same time, I think there's going to be a lot more need for accountability across that life cycle of data, whether we're talking about following data between the data brokers and the browser history, and my preference model as a consumer. There's got to at least be a little bit more accountability across that pattern. It's obviously going to be a very complicated thing to solve. ...Honestly, I think accountability is going to be demand oriented, whether that demand comes from the policy side or the consumer side. People have started to understand there is something happening in the news feed. It's not just a purely objective timeline. It's not linear. Just that level of knowledge has changed the discussion. That's why we're talking about the objectivity of Facebook's news feed and whether or not you're seeing political news on one side or the other, or the trending topics. Being part of that larger discussion, even if it's not reaching a huge range of consumers, is making consumers more educated toward caring about these things.

    Empowering the consumer

    The ideal is not far off. It's just that in practice we're not there yet. I think a lot of people would probably agree that ideal personalization is about relevancy. It's about being meaningful to the consumer and providing something that's valuable. I also think it has to do with being empowering—so, not just pushing something onto the consumer, like we know what's best for you or we're anticipating your needs, but really giving them the opportunity to explore what they need and make choices in a smart way.

    Shaping the conversation

    One of the things we talk about on the data side of things is 'targeting' people. Think about that word. Targeting? Putting a gun to a consumer's head? When you think about it that way, it's like, okay, yeah, this is a one-way conversation. This is not really giving any agency to the person who is part of that conversation. I'm really interested in trying to open up that dialogue in a way that's beneficial to all parties involved. ...I think a lot about the language we use to talk about this stuff. I've written about the metaphors we use to talk about data—data lakes, data as the new oil, and all these kinds of industrial-heavy analogies that really put the focus on the people with the power and the technology and the industry side of things, without necessarily supporting the human side of things. ...It shapes whatever it is you think you're doing, either as a marketer or as the platform that's making those opportunities possible. It's not very sensitive to the subject, really.

    18 min
  4. 02/09/2017

    Tom Davenport on mitigating AI's impact on jobs and business

    The O'Reilly Radar Podcast: The value humans bring to AI, guaranteed job programs, and the lack of AI productivity. This week, I sit down with Tom Davenport. Davenport is a professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a fellow at the MIT Center for Digital Business, and a senior advisor for Deloitte Analytics. He also pioneered the concept of “competing on analytics.” We talk about how his ideas have evolved since writing the seminal work on that topic, Competing on Analytics: The New Science of Winning; his new book Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, which looks at how AI is impacting businesses; and, more broadly, how AI is impacting society and what we need to do to keep ourselves on a utopian path. Here are some highlights:

    How AI will impact jobs

    In terms of AI impact, there are various schools of thought. Tim O'Reilly's in the very optimistic school. There are other people in the very pessimistic school, thinking that all jobs are going to go away, or 47% of jobs are going to go away, or we'll have rioting in the streets, or our robot overlords will kill us all. I'm kind of in the middle, in the sense that I do think it's not going to be an easy transition for individuals and businesses, and I think we should certainly not be complacent about it and assume the jobs will always be there. But I think it's going to take a lot longer than people usually think to create new business processes and new business models and so on, and that will mean that the jobs will largely continue for long periods. One of my favorite examples is bank tellers. We had about half a million bank tellers in the U.S. in 1980. Along come ATMs and online banking, and so on. You'd think a lot of those tasks would be replaced. We have about half a million bank tellers in the United States in 2016, so... Nobody would recommend it as a growth career, and it is slowly starting to decline, but I think we'll see that in a lot of different areas. And then I think there will be a lot of good jobs working alongside these machines—that's really the primary focus of our book [Only Humans Need Apply: Winners and Losers in the Age of Smart Machines]: identifying five ways that humans can add value to the work of smart machines.

    The appeal of augmentation

    Think about what it is that humans bring to the party. Automation, in a way, is a kind of downward spiral. If everybody's automating something in an industry, the prices decline, and margins decline, and innovation is harder because you’ve programmed this system to do things a certain way. So, as a starting assumption, I think augmentation is a much more appealing one for a lot of organizations than, ‘We're going to automate all the jobs away.’

    Guaranteed job programs

    If I were a leader in the United States, I would say the people who are going to need the most help are not so much the knowledge workers, who are kind of used to learning new stuff and transforming themselves to some degree, but the long-distance truck drivers. We have three million in the United States, and I think you'll probably see autonomous trucks on the interstate, maybe in special lanes or something, before we see autonomous cars in most cities. That's going to be tougher, because truck drivers, as a class, are probably not that comfortable transforming themselves by taking courses here and there and learning the skills they need to learn. So, in that case, maybe we will need some guaranteed income programs—or, I'd actually prefer to see guaranteed job programs. With a guaranteed income, you think, ‘Well, maybe they'll take up new sports or artistic pursuits,’ or whatever. Turns out, there's some evidence that what most people do when they have a guaranteed income is sleep more and watch TV more—kind of not good for society in general. Guaranteed job programs worked in the Great Depression—the Civilian Conservation Corps, and artists and writers and so on—so we could do something like that. Whether this country would ever do it is not so clear.

    The (lacking) economic value of AI

    In a way, what’s missing in the AI conversation is the same thing I saw missing when I started working in analytics: it's a very technical conversation, for the most part. There's not that much yet on how it will change key business and organizational processes—how do we get some productivity out of it? I mean, we desperately need more productivity in this country. We haven't increased it much over the past several years. A great example is health care. We have systems that can read radiological images and say, ‘You need a biopsy, because this looks suspicious,’ in a prostate cancer or breast cancer image, or, ‘This pathology image doesn't look good. You need a further biopsy, a more detailed investigation,’ but we haven't really reduced the number of radiologists or pathologists at all, so what's the economic value? We've had these for more than a decade. What's the economic value if we're not creating any more productivity? I think the business and social and political change is going to be a lot harder for us to address than the technical change, and I don't think we're really focusing much on that. I mean, there's no discussion of it in politics, and not yet enough in the business context, either.

    17 min
  5. 01/26/2017

    Genevieve Bell on moving from human-computer interactions to human-computer relationships

    The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation. This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation. Here are some highlights:

    AI’s place on the wow-ahh-hmm curve of human existence

    I think, in some ways, for me, the reason for wanting to put AI into a lineage is that many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Cycle: the notion that at first you don’t talk about a technology very much, then there's the arc where it's everywhere, and then it goes into the valley where it's not so spectacular, until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That’s amazing,' promptly followed by, 'Uh-oh, is it going to kill us?' promptly followed by, 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that. At the moment, if you read the tech press, the trade presses, and the broader news, AI is simultaneously the answer to everything. It's going to provide us with safer cars and safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, it raises the traditional fears of technology: Is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us? Then there are the reasonable questions about what models we are using to build this technology out. When you look across the ways it's being talked about, there are those three different factors: one of excessive optimism, one of a deep dystopian fear, and then another starting to run a critique of the decisions being made around it. I think that’s, in some ways, a very familiar set of positions about a new technology.

    Looking beyond the app that finds your next cup of coffee

    I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems. The reality is, 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like. In the fields that I come out of, we've talked about human-computer interactions since about the same time as AI, and those interactions have really sat inside one paradigm—what we might call a command-and-control infrastructure. You give a command to the technology, you get some sort of answer back; whether that’s old command-prompt lines or Google search boxes, it is effectively the same thing. We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that’s living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants. There's a new class of intelligent agents coming, and I sometimes wonder whether, if we move from a world of human-computer interactions to a world of human-computer relationships, we have to start thinking differently. What does it mean to imagine technology that is nurturing, or that cares, or that wants you to be happy, not just efficient, or that wants you to be exposed to transformative ideas? It would be very different from the app that finds you your next cup of coffee.

    There’s a lot of room for good AI conversations

    What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. Like I said, AI doesn’t come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history, such that we forget how it came to be. So for me, a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation, because I think they're all part of it. I worry that this just becomes a conversation of technologists talking to each other about speeds and feeds and their latest instantiation, as opposed to saying: if we really are imagining a form of an object that will be in dialogue with us, and supplementing and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing. The third, I think, is emerging—I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique. How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask questions about what pieces of data it is weighing against what, and why? How do we have the kind of dialogue that says, sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train objects? For me, there's then the fourth thing, which is: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck, in reading many of the histories of AI, but also the contemporary work, by how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing, so we should just keep doing it? I realized that sitting inside those are always judgments about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines. So what would it mean to start asking a set of theoretical questions that come out of feminist theory, out of Marxist theory, out of queer theory, out of critical race theory, about what it means to imagine normal here—what is and what isn't? Machine learning people would recognize this as the question of how you deal with the outliers. My theory would be: what if we started with the outliers rather than the center, and where would that get you? The fifth thing that’s missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is whether we can give you that moment where we de-familiarize something. How do you take a thing you think you know and turn it on its head so you go, 'I don’t recognize that anymore'? For me, that’s often about how you give it a history. Increasingly, I realize in this space there's also a question to ask about what other things we have tried to machine learn on—what other things have we tried to use natural language processing, reasoning, and induction on to make into supplemental humans, or into things that do tasks for us? Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheep dogs, bomb-sniffing dogs, Koko the gorilla, who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that’s sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that’s actually good. It means there's a lot of room for good conversations.

    23 min
  6. 01/12/2017

    Pagan Kennedy on how people find, invent, and see opportunities nobody else sees

    The O'Reilly Radar Podcast: The art and science of fostering serendipity skills. On this week's episode of the Radar Podcast, O'Reilly's Mac Slocum chats with award-winning author Pagan Kennedy about the art and science of serendipity—how people find, invent, and see opportunities nobody else sees, and why serendipity is actually a skill rather than just dumb luck. Here are some highlights:

    The roots of serendipity

    It's really helpful to go back to the original definition of serendipity, which arose in a very whimsical, serendipitous way back in the 1700s. There was this English eccentric named Horace Walpole who was fascinated with a fairy tale called 'The Three Princes of Serendip.' In this fairy tale, the three princes are Sherlock Holmes-like detectives who have amazing forensic skills. They can see clues that nobody else can see. Walpole was thinking about this and was very delighted with the idea, so he came up with the word 'serendipity.' In that original definition, Walpole really was talking about a skill: the ability to find what we're not looking for, especially really useful clues that lead to discoveries. In the intervening couple hundred years, the word has almost migrated to the opposite meaning, where we just talk about dumb luck. ...I'm not against that meaning, but I think it's really useful, especially in the age of big data, to go back to that original meaning and talk again about this as a skill.

    The interplay between technology, the human mind, and serendipity

    There's a really interesting interplay between tools and the human mind and serendipity. If you look at the history of science, when something like the telescope or the microscope appears, there are waves of discovery because these tools have made things that were formerly invisible visible. When patterns that you couldn't see before become visible, of course, smart, creative people find those patterns and begin working with them. I think the data tools and all the new tools we've got are amazing because they make patterns visible that we wouldn't have been able to see before; but in the end, they're tools, and you've got to have a human mind at the other end of the tool. If the tool throws up a really important anomaly or pattern, you've got to have a human being there who not only sees it and recognizes it, but also gets super excited about it, defends it, explores it, and sees the opportunity there.

    Serendipity as a highly emotional process

    A class of people who tend to be very good at finding, inventing, and seeing opportunities that nobody else sees are surgeons. I'd really like to emphasize that this kind of problem solving, this kind of pattern finding, is not just intellectualizing. It can be very emotional. When surgeons have a problem—somebody dies—they stay up at 3 a.m. thinking about what went wrong with their tools. It's that kind of worrying that is often involved in this kind of search for patterns or opportunities nobody else is seeing. It's not just an intellectual process, but a highly emotional one where you're very worried. This kind of process might not be very good for your health, but it's very good for your creativity—that kind of replaying: not just noticing in the moment what's going wrong or what might be in the environment that nobody else is seeing, but going over it in your head and thinking about alternative realities.

    17 min
  7. 12/29/2016

    Giles Colborne on AI's move from academic curiosity to mainstream tech

    The O'Reilly Radar Podcast: Designing for mainstream AI, natural language interfaces, and the importance of reinventing yourself. This week we're featuring a conversation from earlier this year—O'Reilly's Mary Treseler chats with Giles Colborne, managing director of cxpartners. They talk about the transformative effects of AI on design, designing for natural language interactions, and why designers need to nurture the ability to reinvent themselves. The conditions are ripe for AI to enter the mainstream Mobile is the platform people want to use. ... That means that a lot of businesses are seeing their traffic shift to a channel that actually doesn't work as well, but people would like it to work well. At the same time, mobile devices have become incredibly powerful. Organizations are suddenly finding themselves flooded with data about user behavior. Really interesting data. It's impossible for a person to understand, but if you have a very powerful device in the user's hand, and you have powerful computers than can crunch this data and shift it around quickly, suddenly, technologies like AI become really important, and you can start to predict what the user might want. Therefore, you can remove a little bit of the friction from mobile. Looking around at this landscape a couple years ago, it's obvious that is going to be where something interesting happens soon. Sure enough, you can see that everywhere now. The interest in AI is phenomenal. At its simplest, the crudest application of AI is simply that: to shortcut user input. That's a very simple application, but it's incredibly powerful. It has a transformative effect. That's why I think AI is really important, is why I think its time is now. That's why I think you're starting to see it everywhere. The conditions are ripe for AI to move from being an academic curiosity into what it is now: mainstream. 
Designing natural language interfaces One of the things we've been working on a lot recently is designing around chat interfaces, or natural language interfaces (NLIs). That's a form of algorithmic design, a really complex form. Essentially, a lot of the features that you find in other forms of AI design are there in designing natural language interfaces. As we've been exploring that space, obviously our instinct is to go back to the psychology of language and really study that so that we're building it in, where we're understanding what we're hearing and trying to model artificial conversations. That's led us very quickly to realize that we need tools that support those sorts of language structures as well. We've been working with a company called Artificial Solutions, which provided us with wonderful tools that enable us to very rapidly model—and almost prototype in the browser—natural language interactions much faster than writing out scripts or running through Post-It notes. You can very quickly see, 'This is where this conversation feels awkward; this is where this conversation is breaking down.' I think that ability to rapidly prototype is incredibly important. Embracing reinvention I think anybody working today needs to be endlessly curious to keep up with the speed with which technology forces us to reinvent ourselves—AI is a great example of that; there are going to be an awful lot of roles that will need to be reinvented as AI support tools become mainstream. That ability to be curious and to reinvent yourself is really important. The ability to see things from multiple points of view simultaneously is important as well. We've hired some great people from media backgrounds, and they very naturally have that ability to shift between the actor, if you like—which in our case is the interactive thing that we're designing—the audience, and the author, and are able to think about each of those viewpoints. 
As you're learning through a design process, you need to be able to hold each of those viewpoints in your head simultaneously. That's really important.

    31 min
  8. 12/15/2016

    Brad Knox on creating a strong illusion of life

    The O'Reilly Radar Podcast: Imbuing robots with magic, eschewing deception in AI, and problematic assumptions of human-taught reinforcement learning. In this episode, I sit down with Brad Knox, founder and CEO of Emoters, a startup building a product called bots_alive—animal-like robots that have a strong illusion of life. We chat about the approach the company is taking, why robots or agents that pass themselves off as human without any transparency should be illegal, and some challenges and applications of reinforcement learning and interactive machine learning. Here are some links to things we talked about and some highlights from our conversation: Links: bots_alive Bot Party Knox's article: Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance Knox's article: Power to the People: The Role of Humans in Interactive Machine Learning bots_alive's NSF award: Design, deployment, and algorithmic optimization of zoomorphic, interactive robot companions Creating a strong illusion of life I've been working on a startup company, Emoters. We're releasing a product called bots_alive, hopefully in January, through Kickstarter. Our big vision there is to create simple, animal-like robots that have a strong illusion of life. This immediate product is going to be a really nice first step in that direction. ... If we can create something that feels natural, that feels like having a simple pet—maybe not for a while anything like a dog or cat, but something like an iguana, or a hamster—where you can observe it and interact with it, it would be really valuable to people. The way we're creating that is going back to research I did when I was at MIT with Cynthia Breazeal and a master's student, Sam Spaulding—machine learning from demonstration on human-improvised puppetry. 
Our hypothesis for this product is that if you create an artificially intelligent character using current methods, you sit back and think, 'Well, in this situation, the character should do this.' For example, a traditional AI character designer might write the rule for an animal-like robot that if a person moves his or her hand quickly, the robot should be scared and run away. That results in some fairly interesting characters, but our hypothesis is that we'll get much more authentic behaviors, something that really feels real, if we first allow a person to control the character through a lot of interactions. Then, take the records and the logs of those interactions, and learn a model of the person. As long as that model has good fidelity—it doesn't have to be perfect, but captures the puppeteer with pretty good fidelity—and the puppeteer is actually creating something that would be fun to observe or interact with, then we're in a really good position. … It's hard to sit back and write down on paper why humans do the things we do, but what we do in various contexts is going to be in the data. Hopefully, we'll be able to learn that from human demonstration and really imbue these robots with some magic. A better model for tugging at emotions The reason I wrote that Tweet [Should a robot or agent that widely passes for human be illegal? I think so.] is that if a robot or an agent—you could think of an agent as anything that senses the state of its environment, whether it's a robot or something like a chat bot, just something you're interacting with—if it can pass as human and it doesn't give some signal or flag that says, 'Hey, even if I appear human, I'm not actually human,' that really opens the door to deception and manipulation. 
For people who are familiar with the Turing Test—which is by far the most well-known test for successful artificial intelligence—the issue I have with it is that, ultimately, it is about deceiving people, about them not being able to tell the difference between an artificially intelligent entity and a human. For me, one real issue is that, as much as I'm generally a believer in capitalism, I think there's room for abuse by commercial companies. For instance, it's hard enough when you're walking down the street and a person tries to get your attention to buy something or donate to some cause. Part of that is because it's a person and you don't want to be rude. When we create a large number—eventually, inexpensive fleets—of human-like or pass-for-human robots that can also pull on your emotions in a way that helps some company, I think the negative side is realized at that point. ... How is that not a contradiction [to our company's mission to create a strong illusion of life]? The way I see illusion of life (and the way we're doing it at bots_alive) is very comparable to cartoons or animation in general. When you watch a cartoon, you know that it's fake. You know that it's a rendering, or a drawing, or a series of drawings with some voice-over. Nonetheless, if you're like most people, you feel and experience these characters in the cartoon or the animation. ... I think that's a better model, where we know it's not real but we can still feel that it's real to the extent that we want to. Then, we have a way of turning it off and we're not completely emotionally beholden to these entities. Problematic assumptions of human-taught reinforcement learning I was interested in the idea of human training of robots in an animal training way. 
Connecting that to reinforcement learning, the research question we posed was: instead of the reward function being coded by an expert in reinforcement learning, what happens if we instead give buttons or some interface to a person who knows nothing about computer science, nothing about AI, nothing about machine learning, and that person gives the reward and punishment signals to an agent or a robot? Then, what algorithmic changes do we need to make the system learn what the human is teaching the agent to do? If it had turned out that the people in the study had not violated any of the assumptions of reinforcement learning when we actually did the experiments, I think it wouldn't have ended up being an interesting direction of research. But this paper dives into the ways that people did violate, deeply violate, the assumptions of reinforcement learning. One emphasis of the paper is that people tend to have a bias toward giving positive rewards. A large percentage of the trainers we had in our experiments would give more positive rewards than punishment—or in reinforcement learning terms, 'negative rewards.' The way reinforcement learning is set up, a lot of reinforcement learning tasks are what we call 'episodic'—roughly, what that means is that when the task is completed, the agent can't get further reward. Its life is essentially over, but not in a negative way. When we had people sit down and give reward and punishment signals to an agent trying to get out of a maze, they would give a positive reward for getting closer to the goal, but then this agent would learn, correctly (at least by the assumptions of reinforcement learning), that if it got to the goal, (1) it would get no further reward, and (2) if it stayed in the world that it's in, it would get a net positive reward. 
The weird consequence is that the agent learns that it should never go to the goal, even though that's exactly what these rewards are supposed to be teaching it. In this paper, we discussed that problem and showed the empirical evidence for it. Basically, the assumptions that reinforcement learning typically makes are really problematic when you're letting a human give the reward.
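The failure mode Knox describes can be reproduced in a few lines. The sketch below is a hypothetical four-state corridor (not an environment from the paper): a positively biased human-style reward gives +1 for every step toward the goal and never punishes, the goal state is terminal, and value iteration then shows that the optimal policy next to the goal is to step *away* from it, because looping forever collects more discounted reward than finishing the episode.

```python
# Hypothetical illustration of positive-reward bias in an episodic task:
# states 0..2 are maze positions, state 3 is the terminal goal.
GAMMA = 0.9
STATES = [0, 1, 2]
GOAL = 3

def step(s, action):
    """Deterministic moves: 'right' heads toward the goal, 'left' away."""
    if action == "right":
        return s + 1
    return max(s - 1, 0)  # can't walk off the left end

def reward(action):
    """Positively biased trainer: +1 for progress, never a punishment."""
    return 1.0 if action == "right" else 0.0

# Value iteration; the terminal goal yields no further reward (V[GOAL] = 0).
V = {s: 0.0 for s in STATES}
V[GOAL] = 0.0
for _ in range(500):
    new_V = {s: max(reward(a) + GAMMA * V[step(s, a)]
                    for a in ("right", "left"))
             for s in STATES}
    new_V[GOAL] = 0.0
    V = new_V

policy = {s: max(("right", "left"),
                 key=lambda a, s=s: reward(a) + GAMMA * V[step(s, a)])
          for s in STATES}

# Next to the goal, entering it is worth 1.0 (then the episode ends), while
# stepping away and coming back forever is worth ~4.74, so the agent learns
# never to finish -- exactly the weird consequence described above.
print(policy[2])  # prints: left
```

The numbers fall out of the discounting: finishing yields a one-time +1, while oscillating between states 1 and 2 yields +1 every other step, for a return of 0.9/(1 - 0.81) ≈ 4.74. Adding a terminal bonus large enough to beat that loop, or allowing negative rewards, removes the pathology.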

    1 hr