Oracle University Podcast

Oracle Corporation

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.

  1. 2 DAYS AGO

    AI Across Industries and the Importance of Responsible AI

    AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it.   This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values.   In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------- Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI. Lois: Today, we’ll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we’ll also touch upon ethical and responsible AI.  01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. So, can you explain how it is being used in industries like retail, hospitality, health care, and so on?  Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it. 02:40 Nikita: Agreed! Thanks for that overview, but let’s get into specific scenarios within each industry.  Hemant: Let us take a scenario in the retail industry-- a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions. 
Managers want to delight shoppers and increase sales but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. This data can be used to train a forecasting model that can make predictions, such as demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers on how to maximize sales of well-stocked items while mitigating possible shortages. 04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant? Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments. 05:27 Nikita: That’s great. And what about Financial Services? I know banks use AI quite often to detect fraud. Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and at offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients. 06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care? Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem by working through electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors along with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care. 07:32 Lois: Let’s take a look at one more industry. How is manufacturing using AI? 
Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots after being trained on a large data set. In addition, problematic or ambiguous data can be highlighted for human inspectors. 08:36 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I’m sure AI has its constraints too, right? Can you talk about what happens if AI isn’t able to echo human ethics?  Hemant: AI can fail due to lack of ethics. AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us. Decisions are only as good as the data behind them. For example, health care AI has underdiagnosed women because the research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI has downgraded resumes just because they contained the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have programmed the use of AI not to be evil? So, it's clear that AI needs oversight to function smoothly. 10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values? Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems, and structures. And the branches are the outcomes we aim for. If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorith
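The defect-inspection idea Hemant describes, a vision model trained on labeled images whose ambiguous cases are routed to human inspectors, can be sketched with any classifier that reports a confidence score. The sketch below is hypothetical and uses scikit-learn on synthetic feature vectors standing in for X-ray images; it only illustrates the confidence-routing pattern, not Oracle's implementation.

# Sketch: classify parts as defective vs. good, and route low-confidence
# cases to human inspectors (synthetic data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Pretend each row is a feature vector extracted from an X-ray image.
n = 1000
features = rng.normal(size=(n, 20))
labels = (features[:, 0] + 0.5 * features[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(features[:800], labels[:800])

# Score new parts and split them into auto-reject, auto-ship, and human review.
proba_defect = model.predict_proba(features[800:])[:, 1]
for p in proba_defect[:10]:
    if p > 0.9:
        decision = "reject (likely defect)"
    elif p < 0.1:
        decision = "ship (likely good)"
    else:
        decision = "send to human inspector"   # ambiguous cases get flagged
    print(f"defect probability {p:.2f} -> {decision}")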

    19 min
  2. 23 SEPT

    Oracle AI for Fusion Apps

Want to make AI work for your business? In today’s episode, Lois Houston and Nikita Abraham continue their discussion of AI in Oracle Fusion Applications by focusing on three key AI capabilities: predictive, generative, and agentic.   Joining them is Principal Instructor Yunus Mohammed, who explains how predictive, generative, and agentic AI can optimize efficiency, support decision-making, and automate tasks—all without requiring technical expertise.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we explored the essential components of the Oracle AI stack and spoke about Oracle’s suite of AI services.  Nikita: Yeah, and in today’s episode, we’re going to go down a similar path and take a closer look at the AI functionalities within Oracle Fusion Applications. 00:53 Lois: With us today is Principal Instructor Yunus Mohammed. Hi Yunus! It’s lovely to have you back with us. For anyone who doesn’t already know, what are Oracle Fusion Cloud Applications?  Yunus: Oracle Fusion Applications are a suite of cloud-based enterprise applications designed to run your business across finance, HR, supply chain, sales, service, and more, all on a unified platform. They are designed to help enterprises operate smarter and faster by embedding AI directly into business processes. That means better forecasts in finance, faster hiring decisions in HR, optimized supply chains, and a more personalized customer experience.  01:42 Nikita: And we know they’ve been built for today's fast-paced, AI-driven business environment. So, what are the different functional pillars within Oracle Fusion Apps? Yunus: The first one is ERP, Enterprise Resource Planning, which supports financials, procurement, and project management. It's the backbone of many organizations' day-to-day operations. HCM, or Human Capital Management, handles workforce-related processes such as hiring, payroll, performance, and talent development, helping HR teams operate more efficiently. SCM, or Supply Chain Management, enables businesses to manage their logistics, inventory, suppliers, and manufacturing. It's particularly critical in industries with complex operations like retail and manufacturing. CX, or Customer Experience, covers the full customer life cycle, which includes sales, marketing, and service. These modules help businesses connect with their customers more personally and proactively, whether through targeted campaigns or responsive support.  03:02 Lois: Yunus, what sets Fusion apart? Yunus: What sets Fusion apart is how these applications work seamlessly together. 
They share data natively and continuously improve with AI and automation, giving you not just tools, but intelligence at scale.  Oracle applications are built to be AI first, with a complete suite of finance, supply chain, manufacturing, HR, sales, service, and marketing applications that is tightly coupled with our industry and data intelligence applications. The easiest and the most effective way to start building your organization’s AI muscle is with AI embedded in Fusion applications. For example, if the customer needs to return a defective product, the service representative simply clicks on Ask Oracle for the answers. Since the AI agent is embedded in the application, it has contextual information about the customer, the order, and any special service, contract, or any other feature that is required for this process. The AI agent automatically figures out the return policy, including the options to send a replacement product immediately or offer a discount for the inconvenience, and also expedite shipping. Another AI agent sends a personalized email confirming details of the return, and a different AI agent creates the replacement order for fulfillment and shipping. Our AI-embedded Fusion Applications can automate an end-to-end business process from service request to return order to fulfillment and shipping and then accounting.  These are pre-built and tested so that all the worry and hard work is removed from the implementation point of view. They cover the core workflows. Basically, they address tasks that form part of the organization's core workflow. Users require no technical knowledge in these scenarios.  05:16 Lois: That’s great! So, you don’t need to be an AI expert or a data scientist to get going. Yunus: The outcomes are super fast in business software, and context is everything. Just having the right information isn't enough. This is about having the information in the right place at the right time for it to be instantly actionable. They are ready from day one and can be optimized over time. They are powerful out of the box and only get better with day-to-day processes and performance. 05:55 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today!  06:20 Nikita: Welcome back! So, when we talk about the AI capabilities in Fusion apps, I know we have different types. Can you tell us more about them?  Yunus: Predictive AI is where it all started. These models analyze historical patterns and data to anticipate what might happen next. For example, predicting employee attrition, forecasting demand in the supply chain, or flagging potential late payments in finance workflows. These are embedded into business processes to surface insights before action is needed. Then we have generative AI, which takes this a step further. Instead of just providing insights, it creates content, such as auto-generating job descriptions, summarizing performance reviews, or even crafting draft responses to supplier queries. This saves time and boosts productivity across functions like HR, CX, and procurement. Last but not least, we have agentic AI, which is the most advanced layer. These agents don't just provide suggestions; they take action on behalf of the users. 
Think of an agent that not only recommends actions in a workflow, but also executes them, creating tasks, filing tickets, updating systems, and communicating with stakeholders, all autonomously but under user control. And importantly, many business scenarios today benefit from a blend of these types. For example, an AI assistant in Fusion HCM might predict employee turnover (predictive AI), generate tailored retention plans (generative AI), and initiate outreach or next steps (agentic AI). So, Oracle integrates these capabilities in a harmonious way, enabling users to act faster, personalize at scale, and drive better business outcomes.  08:39 Lois: Ok, let’s get into the specifics. How does Oracle use predictive AI across its Fusion apps, helping businesses anticipate what’s coming and act proactively?  Yunus: So in HCM, take a feature like recommended jobs. Candidates visiting a potential employer's website encounter an improved online experience, whereby if they have uploaded their resumes, they will be shown job opportunities that match their skills and experience. This helps candidates who are unsure what to search for by showing them roles and titles they may not have considered. Time to hire provides an estimate of how long it will take for an HR team to fill an open role. This is really useful not only in terms of planning recruitment, but also in terms of understanding whether you might need some temporary cover and how long the process will actually take to complete. In supply chain management, predictive AI is leveraged to predict transit times and estimated times of arrival, which is known as predictive analysis, enhancing efficiency and optimizing operations. It can flag abnormal patterns in supply or inventory, for example, if a batch of parts is behaving differently on the production line, and predict future demand, helping avoid overstocking or stockouts. In ERP, you can audit your expenses, plan for future expenses, and do dynamic discounting for vendors who are likely to accept early payment discounts. It can also speed up reimbursements through automated expense entries. In CX, you have adaptive intelligence for sales, which helps representatives prioritize leads based on the likelihood that a specific lead will close, helping them focus their time and effort. Predictive scheduling and routing in service delivery ensures that the right resource is assigned to the right customer at the right time, boosting operational efficiency and customer satisfaction, also known as fatigue analysis. 11:23 Lois: Now let’s shift our fo
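The blended HCM example above (predict attrition, generate a retention plan, then let an agent act) can be pictured as a three-step pipeline. The sketch below is hypothetical: the predictive step is a real scikit-learn model trained on synthetic data, while the generative and agentic steps are stubs standing in for calls to a large language model and to a workflow/task API. The employee ID and feature names are invented for illustration.

# Sketch of a predictive -> generative -> agentic pipeline (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Predictive AI: train an attrition model on synthetic HR features
# (tenure_years, engagement_score, months_since_promotion).
X = rng.normal(size=(500, 3))
y = (X[:, 1] < -0.3).astype(int)          # low engagement -> higher attrition risk
attrition_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def generate_retention_plan(employee_id: str, risk: float) -> str:
    # Generative AI stub: a real system would call a large language model here.
    return (f"Draft plan for {employee_id}: schedule a career conversation, "
            f"review compensation, and offer a mentorship slot (risk={risk:.2f}).")

def agent_take_action(employee_id: str, plan: str) -> None:
    # Agentic AI stub: a real agent would create tasks and notify the manager
    # through the application's workflow APIs, under user control.
    print(f"[agent] created HR task for {employee_id}: {plan}")

# Run the blend for one (hypothetical) employee.
employee_features = np.array([[2.0, -1.1, 18.0]])
risk = attrition_model.predict_proba(employee_features)[0, 1]
if risk > 0.5:
    plan = generate_retention_plan("EMP-1042", risk)
    agent_take_action("EMP-1042", plan)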

    18 min
  3. 16 SEPT

    Oracle's AI Ecosystem

    In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to explore Oracle’s approach to enterprise AI. The conversation covers the essential components of the Oracle AI stack and how each part, from the foundational infrastructure to business-specific applications, can be leveraged to support AI-driven initiatives.   They also delve into Oracle’s suite of AI services, including generative AI, language processing, and image recognition.     AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   -------------------------------------------------------------   Episode Transcript:  00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we discussed why the decision to buy or build matters in the world of AI deployment. Lois: That’s right, Niki. Today is all about the Oracle AI stack and how it empowers not just developers and data scientists, but everyday business users as well. Then we’ll spend some time exploring Oracle AI services in detail.  01:00 Nikita: Yunus Mohammed, our Principal Instructor, is back with us today. Hi Yunus! Can you talk about the different layers in Oracle’s end-to-end AI approach? Yunus: The first base layer is the foundation of AI infrastructure, the powerful compute and storage layer that enables scalable model training and inferences. Sitting above the infrastructure, we have got the data platform. This is where data is stored, cleaned, and managed. Without a reliable data foundation, AI simply can't perform. So base of AI is the data, and the reliable data gives more support to the AI to perform its job. Then, we have AI and ML services. These provide ready-to-use tools for building, training, and deploying custom machine learning models. Next, to the AI/ML services, we have got generative AI services. This is where Oracle enables advanced language models and agentic AI tools that can generate content, summarize documents, or assist users through chat interfaces. Then, we have the top layer, which is called as the applications, things like Fusion applications or industry specific solutions where AI is embedded directly into business workflows for recommendations, forecasting or customer support. Finally, Oracle integrates with a growing ecosystem of AI partners, allowing organizations to extend and enhance their AI capabilities even further. In short, Oracle doesn't just offer AI as a feature. It delivers it as a full stack capability from infrastructure to the layer of applications. 02:59 Nikita: Ok, I want to get into the core AI services offered by Oracle Cloud Infrastructure. But before we get into the finer details, broadly speaking, how do these services help businesses? 
Yunus: These services make AI accessible, secure, and scalable, enabling businesses to embed intelligence into workflows, improve efficiency, and reduce human effort in repetitive or data-heavy tasks. And the best part is, Oracle makes it easy to consume these through application programming interfaces (APIs), software development kits (SDKs), and integration with Fusion Applications. So, you can add AI where it matters without needing a team of data scientists to do that work.  03:52 Lois: So, let’s get down to it. The first core service is Oracle's Generative AI service. What can you tell us about it?  Yunus: This is a fully managed service that allows businesses to tap into the power of large language models. You can work with these models out of the box or develop them further into well-defined custom models. You can use these models for a wide range of use cases like summarizing text, generating content, answering questions, or building AI-powered chat interfaces.  04:27 Lois: So, what will I find on the OCI Generative AI Console? Yunus: The OCI Generative AI Console highlights three key components. The first one is the dedicated AI cluster. These are GPU-powered environments used to fine-tune and host your own custom models. They give you control and performance at scale. Then, the second point is the custom models. You can take a base language model and fine-tune it using your own data, for example, company manuals, HR policies, or customer interactions, which are your own personal data. You can use this to create a model that speaks your business language. And last but not least, the endpoints. These are the interfaces through which your applications connect to the model. Once deployed, your app can query the model securely and at different scales, and you don't need to be a developer to get started. Oracle offers a playground, which is a no-code environment where you can try out models, tweak parameters, and test responses interactively. So overall, the Generative AI service is designed to make enterprise-grade AI accessible and customizable, fitting directly into business processes, whether you are building a smart assistant or automating the content generation process.  06:00 Lois: The next key service is OCI Generative AI Agents. Can you tell us more about it?  Yunus: OCI Generative AI Agents combine a natural language interface with generative AI models and enterprise data stores to answer questions and take actions. The agents remember the context, use previous interactions, and retrieve deeper product-specific details. They aren't just static chatbots. They are context aware, grounded in business data, and able to handle multi-turn, follow-up queries with relevant, accurate responses, driving productivity and decision-making across departments like sales, support, or operations. 06:54 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 07:37 Nikita: Welcome back! Yunus, let’s move on to the OCI Language service.  Yunus: OCI Language helps businesses understand and process natural language at scale. 
It uses pretrained models, which means they are already trained on large industry data sets and are ready to be used right away without requiring AI expertise. It detects over 100 languages, including English, Japanese, Spanish, and more. This is great for global businesses that receive multilingual inputs from customers. It can identify sentiments for different aspects of a sentence. For example, in a review like, “The food was great, but the service sucked,” OCI Language can tell that food has a positive sentiment while service has a negative one. This is called aspect-based sentiment analysis, and it is more insightful than just labeling the entire text as positive or negative. It can also identify key phrases representing important ideas or subjects. So, it helps in extracting the key phrases, words, or terms that capture the core messages. They help automate tagging, summarizing, or even routing of content like support tickets or emails.  In real life, businesses are using this for customer feedback analysis, support ticket routing, social media monitoring, and even regulatory compliance.  09:21 Nikita: That’s fantastic. And what about the OCI Speech service?  Yunus: OCI Speech is an AI service that transcribes speech to text. Think of it as an AI-powered transcription engine that listens to spoken English, whether in audio or video files, and turns it into usable, searchable, and readable text. It provides timestamps, so you know exactly when something was said. A valuable feature for reviewing legal discussions, media footage, or compliance audits. OCI Speech even understands different speakers. You don't need to train this from scratch. It is a pre-trained model hosted behind an API. Just send your audio to the service, and you get accurate, timestamped text back in return. 10:17 Lois: I know we also have a service for object detection… called OCI Vision?  Yunus: OCI Vision uses pretrained, deep learning models to understand and analyze visual content. You can upload an image or video, and just like a human might, the AI can tell you what is in it and where the objects are. There are two primary use cases for OCI Vision. One is object detection. Say you have got a red car. OCI Vision is not just identifying that it's a car. It is detecting and labeling parts of the car too, like the bumper, the wheels, the design components. This is critical in industries like manufacturing, retail, or logistics. For example, in quality control, OCI Vision can scan product images to detect missing or defective parts automatically.  Then we have image classification. This is useful in scenarios like automated tagging of photos, managing digital assets, or classifying the scene or context of an image. So basically, when we talk about OCI Vision, which is actually a fully managed, no complex model training
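Aspect-based sentiment analysis, the "food was great, but the service sucked" example above, can be illustrated with a tiny lexicon-based toy. This is not the OCI Language SDK or its API; it is a minimal sketch that only mimics the shape of the per-aspect result a managed language service would return.

# Toy aspect-based sentiment analysis (illustrative; not the OCI Language API).
import re

POSITIVE = {"great", "excellent", "friendly", "clean"}
NEGATIVE = {"sucked", "terrible", "rude", "dirty"}

def aspect_sentiments(review: str, aspects: list[str]) -> dict[str, str]:
    results = {}
    # Split the review into clauses and judge each aspect by the words near it.
    for clause in re.split(r"[,.;]|\bbut\b", review.lower()):
        words = set(re.findall(r"[a-z']+", clause))
        for aspect in aspects:
            if aspect in words:
                if words & POSITIVE:
                    results[aspect] = "positive"
                elif words & NEGATIVE:
                    results[aspect] = "negative"
                else:
                    results[aspect] = "neutral"
    return results

print(aspect_sentiments("The food was great, but the service sucked.",
                        ["food", "service"]))
# -> {'food': 'positive', 'service': 'negative'}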

    16 min
  4. 9 SEPT

    Buy or Build AI?

    How do you decide whether to buy a ready-made AI solution or build one from the ground up? The choice is more than just a technical decision; it’s about aligning AI with your business goals.   In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to examine the critical factors influencing the buy vs. build debate. They explore real-world examples where businesses must weigh speed, customization, and long-term strategy. From a startup using a SaaS chatbot to a bank developing a custom fraud detection model, Yunus provides practical insights on when to choose one approach over the other.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week, we spoke about the key stages in a typical AI workflow and how data quality, feedback loops, and business goals influence AI success. 00:50 Nikita: In today’s episode, we’re going to explore whether you should buy or build AI apps. Joining us again is Principal Instructor Yunus Mohammed. Hi Yunus, let’s jump right in. Why does the decision of buy versus build matter? Yunus: So when we talk about buy versus build matters, we need to consider the strategic business decisions over here. They are related to the strategic decisions which the business makes, and it is evaluated in the decision lens. So the center of the decision lens is the business objective, which identifies what are we trying to solve. Then evaluate our constraints based on that particular business objective like the cost, the time, and the talent. And finally, we can decide whether we need to buy or build. But remember, there is no single correct answer. What's right for one business may not be working for the other one. 01:54 Lois: OK, can you give us examples of both approaches? Yunus: The first example where we have got a startup using a SaaS AI chatbot. Now, being a startup, they have to choose a ready-made solution, which is an AI chatbot. Now, the question is, why did they do this? Because speed and simplicity mattered more than deep customization that is required for the chatbot. So, their main aim was to have it ready in short period of time and make it more simpler. And this actually lead them to get to the market fast with low upfront cost and minimal technical complexities. But in some situations, it might be different. Like, your bank, which needs to build a fraud model. It cannot be outsourced or got from the shelf. So, they build a custom model in-house. With this custom model, they actually have a tighter control, and it is tuned to their standards. And it is created by their experts. 
So these two generic examples, the chatbot and the fraud model example, helps you in identifying whether I should go for a SaaS product with simple choice of selecting an existing LLM endpoint and not making any changes. Or should I go with model depending on my business and organization requirement and fine tuning that model later to define a better implementation of the scenarios or conditions that I want to do which are specific to my organization. So here you decide with the reference whether I want it to be done faster, or whether I want to be more customized to my organization. So buy it, when it is generic, or build when it is strategic. The SaaS, which is basically software as a service, refers to ready to use cloud-based applications that you access via internet. You can log into the platform and use the built-in AI, there's no setup requirement for those. Real-world examples can be Oracle Fusion apps with AI features enabled. So in-house integration means embedding AI with my own requirements into your own systems, often using custom APIs, data pipelines, and hosting it. It gives you more flexibility but requires a lot of resources and expertise. So real-world example for this scenario can be a logistics heavy company, which is integrating a customer support model into their CX. 04:41 Lois: But what are the pros and cons of each approach? Yunus: So, SaaS and Fusion Applications, basically, they offer fast deployment with little to no coding required, making them ideal for business looking to get started quickly and faster. And they typically come with lower upfront costs and are maintained by vendor, which means updates, security, support are handled externally. However, there are limited customizations and are best suited for common, repeatable use cases. Like, it can be a standard chatbot, or it can be reporting tools, or off the shelf analytics that you want to use. But the in-house or custom integration, you have more control, but it takes longer to build and requires a higher initial investment. The in-house or custom integration approach allows full customization of the features and the workflows, enabling you to design and tailor the AI system to your specific needs. 05:47 Nikita: If you're weighing the choice between buying or building, what are the critical business considerations you'd need to take into account? Yunus: So let's take one of the key business consideration which is time to market. If your goal is to launch fast, maybe you're a startup trying to gain traction quickly, then a prebuilt plug and play AI solution, for example, a chatbot or any other standard analytical tool, might be your best bet. But if you have time and you are aiming for precision, a custom model could be worth the wait. Prebuilt SaaS tools usually have lower upfront costs and a subscription model. It works with putting subscriptions. Custom solutions, on the other hand, may require a bigger investment upfront. In development, you require high talent and infrastructures, but could offer cost savings in the long run. So, ask yourself a question here. Is this AI helping us stand out in the market? If the answer is yes, you may want to build something which is your proprietary. For example, an organization would use a generic recommendation engine. It's a part of their secret sauce. Some use cases require flexibility, like you want to tailor the rules to match your specific risk criteria. So, under that scenarios, you will go for customizing. 
So, you will go with off the shelf solutions may not give you deep enough requirements that you want to evaluate. So, you get those and you try to customize those. You can go for customization of your AI features. The other important key business consideration is the talent and expertise that your organization have. So, the question that you need to ask in the organization is, do you have an internal team who is well versed in developing AI solutions for you? Or do you have access to one of the teams which can help you build your own proprietary products? If not, you'll go with SaaS. If you do have, then building could unlock greater control over your AI features and AI models. The next core component is your security and data privacy. If you're handling sensitive information, like for example, the health care or finance data, you might not want to send your data to the third-party tools. So in-house models offer better control over data security and compliance. When we leverage a model, it could be a prebuilt or custom model. 08:50 Oracle University is proud to announce three brand new courses that will help your teams unlock the power of Redwood—the next generation design system. Redwood enhances the user experience, boosts efficiency, and ensures consistency across Oracle Fusion Cloud Applications. Whether you're a functional lead, configuration consultant, administrator, developer, or IT support analyst, these courses will introduce you to the Redwood philosophy and its business impact. They’ll also teach you how to use Visual Builder Studio to personalize and extend your Fusion environment. Get started today by visiting mylearn.oracle.com.  09:31 Nikita: Welcome back! So, getting back to what you were saying before the break, what are pre-built and custom models? Yunus: A prebuilt model is an AI solution that has already been trained by someone else, typically a tech provider. It can be used to perform a specific task like recognizing images, translating text, or detecting sentiments. You can think of it like buying a preassembled appliance. You plug it in, configure a few settings, and it's ready to use. You don't need to know how the internal parts work. You benefit from the speed, ease, and reliability of this particular model, which is a prebuilt model. But you can't easily change how it works under the hood. Whereas, a custom model is an AI solution that your organization designs and trains and tunes specifically for their business problems using their own data. You can think of it like designing your own suit. It takes more time and effort to create. It is built to your exact measurements and needs. And you have full control over how it performs and evolves. 10:53 Lois: So, when would you choose a pre-built versus a custom model? Yunus: Depending on speed, simplicity, control, and customization, you can decide on using a prebuilt or to create a custom model. Prebuilt models ar
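The pre-built versus custom contrast discussed in this episode can be made concrete in code. The sketch below is hypothetical: the "buy" path posts text to a placeholder hosted sentiment endpoint (the URL and response shape are invented, not a real service), while the "build" path trains a small in-house classifier on your own labeled data with scikit-learn.

# Sketch: "buy" a hosted pre-built model vs. "build" a custom one (illustrative).
import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def buy_prebuilt_sentiment(text: str) -> str:
    # Hypothetical endpoint and payload: stands in for any vendor-hosted,
    # pre-trained model you simply call over HTTPS with no training of your own.
    resp = requests.post("https://api.example.com/v1/sentiment",  # placeholder URL
                         json={"text": text}, timeout=10)
    return resp.json()["sentiment"]

def build_custom_sentiment():
    # In-house path: your own labeled data, your own training, full control.
    texts = ["loved the service", "great product", "terrible support", "awful delay"]
    labels = ["positive", "positive", "negative", "negative"]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return model

custom_model = build_custom_sentiment()
print(custom_model.predict(["the support was terrible"])[0])  # expected: negative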

    16 min
  5. 2 SEPT

    The AI Workflow

Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI’s real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we’re going to look at the key stages in a typical AI workflow. We’ll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University.  01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model?  Yunus: The first step is collect data. We gather relevant data, either historical or real time, like customer transactions, support tickets, survey feedback, or sensor logs. A travel company, for example, can collect past booking data to predict future demand. So, data is the most crucial and important component for building your AI models. But it's not just the data. You need to prepare the data. In the prepare data step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We try to make the data more understandable and organized, like removing duplicates, filling missing values in the data with some default values, or formatting dates. All of this comes under organizing the data, along with labeling the data so that it can be used for supervised learning. After preparing the data, I go for selecting the model to train. So now, we pick what type of model fits your goals. It can be a traditional ML model or a deep learning network model, or it can be a generative model. The model is chosen based on the business problems and the data we have. So, we train the model using the prepared data, so it can learn the patterns of the data. Then after the model is trained, I need to evaluate the model. You check how well the model performs. Is it accurate? Is it fair? The metrics of the evaluation will vary based on the goal that you're trying to reach. If your model misclassifies emails as spam and it is doing it very often, then it is not ready. So I need to train it further, to the point where it accurately identifies legitimate mail as legitimate and spam as spam.  
After evaluating and making sure your model fits well, you go to the next step, which is the deploy model step. Once we are happy, we put it into the real world, like into a CRM, or a web application, or an API. So, I can configure that with an API, which is an application programming interface, or I add it to a CRM, Customer Relationship Management, or I add it to a web application that I've got. For example, a chatbot becomes available on your company's website, and the chatbot might be using a generative AI model. Once I have deployed the model and it is working fine, I need to keep track of this model, how it is working, and monitor and improve it whenever needed. So I go for a stage which is called monitor and improve. So AI isn't set it and forget it. Over time, there are a lot of changes happening to the data. So we monitor performance and retrain when needed. An e-commerce recommendation model needs updates as there might be trends which are shifting.  So the end user finally sees the results after all the processes. A better product, or a smarter service, or a faster decision-making model, if we do this right. That is, if we process the flow perfectly, they may not even realize AI is behind it to give them the accurate results.  04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development?  Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or databases, which consists of rows and columns with clear and consistent information. Unstructured is messy data, like your emails, customer call recordings, videos, or social media posts; they all come under unstructured data.  Semi-structured data is things like logs, XML files, or JSON files. Not quite neat but not entirely messy either. So they are termed semi-structured. So structured, unstructured, and then you've got the semi-structured. 05:58 Nikita: Ok… and how do the data needs vary for different AI approaches?  Yunus: Machine learning often needs labeled data. Like a bank might feed past transactions labeled as fraud or not fraud to train a fraud detection model. But machine learning also includes unsupervised learning, like clustering customer spending behavior. Here, no labels are needed. Deep learning needs a lot of data, usually unstructured, like thousands of loan documents, call recordings, or scanned checks. These are fed into the models and neural networks to detect complex patterns. Data science focuses on insights rather than predictions. So a data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data. It gets data from books, code, images, and chat logs. So these models, like ChatGPT, are trained to generate responses, mimic styles, and synthesize content. So generative AI can power a banking virtual assistant trained on chat logs and frequently asked questions to answer customer queries 24/7. 07:35 Lois: What are the challenges when dealing with data?  Yunus: Data isn't just about having enough. We must also think about quality. Is it accurate and relevant? Volume. Do we have enough for the model to learn from? 
And is my data consisting of any kind of unfairly defined structures, like rejecting more loan applications from a certain zip code, which actually gives you a bias of data? And also the privacy. Are we handling personal data responsibly or not? Especially data which is critical or which is regulated, like the banking sector or health data of the patients. Before building anything smart, we must start smart.  08:23 Lois: So, we’ve established that collecting the right data is non-negotiable for success. Then comes preparing it, right?  Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age of like 999. That's likely a data entry error. Or maybe a few rows have missing ages. So we either fix, remove, or impute such issues. This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats. For instance, a date, can be stored as the month and the day values, or it can be stored in some places as day first and month next. We want to bring everything into a consistent, usable format. This process is called as transformation. The machine learning models can get confused if one feature, like example the income ranges from 10,000 to 100,000, and another, like the number of kids, range from 0 to 5. So we normalize or scale values to bring them to a similar range, say 0 or 1. So we actually put it as yes or no options. So models don't understand words like small, medium, or large. We convert them into numbers using encoding. One simple way is assigning 1, 2, and 3 respectively. And then you have got removing stop words like the punctuations, et cetera, and break the sentence into smaller meaningful units called as tokens. This is actually used for generative AI tasks. In deep learning, especially for Gen AI, image or audio inputs must be of uniform size and format.  10:31 Lois: And does each AI system have a different way of preparing data?  Yunus: For machine learning ML, focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science, about reshaping, aggregating, and getting it ready for insights. The generative AI needs special preparation like chunking, tokenizing large documents, or compressing images. 11:06 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to sol
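Most of the preparation steps Yunus lists (fixing an impossible age like 999, filling missing values, normalizing date formats, scaling income, and encoding small/medium/large as numbers) map directly onto a few lines of pandas and scikit-learn. A minimal sketch on made-up data, assuming pandas 2.x:

# Sketch: cleaning, transforming, scaling, and encoding a tiny synthetic data set.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "age": [34, 999, None, 45],                  # 999 is a data-entry error
    "signup_date": ["2024-03-01", "01/04/2024", "2024-05-10", "2024-06-02"],
    "income": [10_000, 55_000, 100_000, 72_000],
    "plan_size": ["small", "large", "medium", "small"],
})

# Clean: treat impossible ages as missing, then impute with the median.
df.loc[df["age"] > 120, "age"] = None
df["age"] = df["age"].fillna(df["age"].median())

# Transform: bring dates stored in different formats into one consistent format.
df["signup_date"] = pd.to_datetime(df["signup_date"], format="mixed")  # pandas >= 2.0

# Scale: squeeze income into the 0-to-1 range so it doesn't dominate other features.
df["income_scaled"] = MinMaxScaler().fit_transform(df[["income"]]).ravel()

# Encode: turn ordered categories into numbers the model can use.
df["plan_size_code"] = df["plan_size"].map({"small": 1, "medium": 2, "large": 3})

print(df)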

    22 min
  6. 26 AUG

    Core AI Concepts – Part 3

    Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services.   Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we’ll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what’s the difference between traditional AI and generative AI?  01:01 Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. And that's what we call traditional AI that's focused on analysis, classification, and prediction.  But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting if an email is spam, generative AI could write an entirely new email for you.  Another example, traditional AI might predict what a photo contains. Generative AI, on the other hand, creates a brand-new photo based on description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video that resembles human-made work.  Instead of simple analyzing or predicting, generative AI produces something original that resembles what a human might create.   02:16 Lois: How did traditional AI progress to the generative AI we know today?  Himanshu: First, we will look at small supervised learning. So in early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries. If email contains, "congratulations," it might be spam. This was efficient for a straightforward task, but it struggled with anything more complex.  Then, comes the large supervised learning. 
As the internet exploded, massive data sets became available: millions of images, billions of text snippets. Models got better because they had much more data and stronger compute power, thanks to advances like GPUs and cloud computing. For example, training a model on millions of product reviews to predict customer sentiment, positive or negative, or to classify thousands of images into cars, dogs, planes, etc.  Models became more sophisticated, capturing deeper patterns rather than simple rules. And then, generative AI came into the picture, and we eventually reached a point where instead of just classifying or predicting, models could generate entirely new content.  Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer a yes or no, but to create outputs that look and feel human-made. Instead of judging spam or sentiment, now the model can write an article, compose a song, paint a picture, or generate new software code.  03:55 Nikita: Himanshu, what motivated this sort of progression?   Himanshu: Because of three reasons. First, data: we had way more of it thanks to the internet, smartphones, and social media. Second is compute. Graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models.  And third, and most important, is ambition. Humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas.   04:25 Lois: So, what’s happening behind the scenes? How is gen AI making these things happen?  Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models, or LLMs.  They are masters of generating text: conversations, stories, emails, and even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos.  And these two together are like two different specialists. The LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these pieces together, we create something called multimodal AI: systems that can take in text and produce images, audio, or other media, opening a whole new range of possibilities.  They can take in not only text but also other kinds of media. So today when we use ChatGPT or Gemini, they can generate images, and it's not just one model doing everything. These are specialized systems working together behind the scenes.  05:38 Lois: You mentioned large language models and how they power text-based gen AI, so let’s talk more about them. Himanshu, what is an LLM and how does it work?  Himanshu: So it's a probabilistic model of text, which means it tries to predict what word is most likely to come next based on what came before.  This ability to predict one word at a time intelligently is what builds full sentences, paragraphs, and even stories.  06:06 Nikita: But what’s large about this? Why’s it called a large language model?   Himanshu: It simply means the model has lots and lots of parameters. And think of parameters as adjustable dials the model fine-tunes during learning.  There is no strict rule, but today, large models can have billions or even trillions of these parameters. And the more parameters there are, the more complex the patterns the model can understand, and the more human-like the language it can generate.  
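The "predict the most likely next word given what came before" idea can be shown with a toy bigram model in pure Python. Real LLMs use neural networks with billions of parameters and much longer context, but this counting sketch captures the probabilistic next-word intuition.

# Toy next-word predictor: count which word follows which (illustrative only).
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count bigram frequencies: how often each word follows each previous word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # "cat" (the word seen most often after "the")
print(predict_next("sat"))   # "on"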
06:37 Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work?  Himanshu: Diffusion models start with something that looks like pure random noise.  Imagine static on an old TV screen. No meaningful image at all. From there, the model carefully removes noise step by step to create something more meaningful. Think of it like sculpting a statue. You start with a rough block of stone and slowly, carefully chisel away to reveal a beautiful sculpture hidden inside.  And in each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images. It's trying to predict.   07:24 Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com.  07:53 Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I’m sure the uses go far beyond that, right? Can you walk us through some of them?  Himanshu: The first one is text generation. So we can talk about chatbots, which are now capable of handling nuanced customer queries in banking, travel, and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options, or a virtual HR assistant in a large company handling leave requests. You can have embedding models, which power smart search systems.  Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents.  The third one is code generation. Tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially in routine or repetitive tasks. Imagine writing a waveform with just a few prompts.  The second application is image generation. So the first obvious use is art. Designers and marketers can generate creative concepts instantly. Say you need illustrations for a campaign on future cities. Generative AI can produce dozens of stylized visuals in minutes.  For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And for realistic images, retail companies generate images of people wearing their clothing items without needing real models or photoshoots, and this reduces cost and increases personalization.  The third application is multimodal systems. These are combined systems that take one kind of input, or a combination of inputs, and produce different kinds of outputs, or even combine various kinds, with text and images in both the input and the output.  Text to image is being used in e-commerce, movie concept art, and
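A tiny numpy sketch can make the noise intuition concrete. It only shows the forward direction, progressively mixing a clean signal with random noise until almost nothing of the original remains; the learned reverse step, predicting and removing that noise to recover an image, is what a trained diffusion model supplies and is not implemented here.

# Forward noising only: watch a clean signal dissolve into noise (illustrative).
import numpy as np

rng = np.random.default_rng(7)
clean = np.sin(np.linspace(0, 2 * np.pi, 200))    # stand-in for a clean image

steps = 5
for t in range(1, steps + 1):
    keep = 1.0 - t / steps                        # how much of the original survives
    noisy = keep * clean + (1 - keep) * rng.normal(size=clean.shape)
    corr = np.corrcoef(noisy, clean)[0, 1]
    print(f"step {t}: {keep:.0%} signal kept, correlation with original = {corr:.2f}")

# A diffusion model is trained to run this process in reverse: starting from
# pure noise, it repeatedly predicts the noise component and removes it,
# step by step, until a brand-new image emerges.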

    23 min
  7. 19 AUG

    Core AI Concepts – Part 2

In this episode, Lois Houston and Nikita Abraham continue their discussion on AI fundamentals, diving into Data Science with Principal AI/ML Instructor Himanshu Raj. They explore key concepts like data collection, cleaning, and analysis, and talk about how quality data drives impactful insights.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Hi everyone! Last week, we began our exploration of core AI concepts, specifically machine learning and deep learning. I’d really encourage you to go back and listen to the episode if you missed it.   00:52 Lois: Yeah, today we’re continuing that discussion, focusing on data science, with our Principal AI/ML Instructor Himanshu Raj.  Nikita: Hi Himanshu! Thanks for joining us again. So, let’s get cracking! What is data science?  01:06 Himanshu: It's about collecting, organizing, analyzing, and interpreting data to uncover valuable insights that help us make better business decisions. Think of data science as the engine that transforms raw information into strategic action.  You can think of a data scientist as a detective. They gather clues, which is our data, connect the dots between those clues, and ultimately solve mysteries, meaning they find hidden patterns that can drive value.  01:33 Nikita: Ok, and how does this happen exactly?  Himanshu: Just like a detective relies on both instincts and evidence, data science blends domain expertise and analytical techniques. First, we collect raw data. Then we prepare and clean it, because messy data leads to messy conclusions. Next, we analyze it to find meaningful patterns in that data. And finally, we turn those patterns into actionable insights that businesses can trust.  02:00 Lois: So what you’re saying is, data science is not just about technology; it's about turning information into intelligence that organizations can act on. Can you walk us through the typical steps a data scientist follows in a real-world project?  Himanshu: So it all begins with business understanding, identifying the real problem we are trying to solve. It's not about collecting data blindly. It's about asking the right business questions first. And once we know the problem, we move to data collection, which is gathering the relevant data from available sources, whether internal or external.  Next comes data cleaning, probably the least glamorous but one of the most important steps. This is where we fix missing values, remove errors, and ensure that the data is usable. Then we perform data analysis, or what we call exploratory data analysis.  Here we look for patterns, trends, and initial signals hidden inside the data. 
After that comes modeling and evaluation, where we apply machine learning or deep learning techniques to predict, classify, or forecast outcomes. Machine learning and deep learning are like specialized equipment in a data science detective's toolkit: powerful, but not the whole investigation.  We also check how good the models are in terms of accuracy, relevance, and business usefulness. Finally, if the model meets expectations, we move to deployment and monitoring, putting the model into real-world use and continuously watching how it performs over time.  03:34 Nikita: So, it’s a linear process?  Himanshu: It's not linear. That's because in real-world data science projects, the process does not stop after deployment. Once the model is live, business needs may evolve, new data may become available, or unexpected patterns may emerge.  And that's why we come back to business understanding again, defining the questions, the strategy, and sometimes even the goals based on what we have learned. In a way, a good data science project behaves like a living system that grows, adapts, and improves over time. Continuous improvement keeps it aligned with business value.   Now, think of it like adjusting your GPS while driving. The route you plan initially might change as new traffic data comes in. Similarly, in data science, new information constantly helps refine our course. The quality of our data determines the quality of our results.   If the data we feed into our models is messy, inaccurate, or incomplete, the outputs, no matter how sophisticated the technology, will also be unreliable. This concept is often called garbage in, garbage out: bad input leads to bad output.  Now, think of it like cooking. Even the world's best Michelin-star chef can't create a masterpiece with spoiled or poor-quality ingredients. In the same way, even the most advanced AI models can't perform well if the data they are trained on is flawed.  05:05 Lois: Yeah, that's why high-quality data is not just nice to have, it’s absolutely essential. But Himanshu, what makes data good?   Himanshu: Good data has a few essential qualities. The first one is complete: make sure we aren't missing any critical field. For example, every customer record must have a phone number and an email. Second, it should be accurate. The data should reflect reality. If a customer's address has changed, it must be updated, not outdated. Third, it should be consistent. Similar data must follow the same format. Imagine if dates are written differently, like 2024/04/28 versus April 28, 2024. We must standardize them.   Fourth, good data should be relevant. We collect only the data that actually helps solve our business question, not unnecessary noise. And the last one: it should be timely, meaning the data should be up to date. Using last year's purchase data for a real-time recommendation engine wouldn't be helpful.  06:13 Nikita: Ok, so ideally, we should use good data. But that’s a bit difficult in reality, right? Because what comes to us is often pretty messy. So, how do we convert bad data into good data? I’m sure there are processes we use to do this.  Himanshu: The first one is cleaning. This is about correcting simple mistakes, like fixing typos in city names or standardizing dates.  The second one is imputation. If some values are missing, we fill them in intelligently, for instance, using the average income for a missing salary field. The third one is filtering. 
In this, we remove irrelevant or noisy records, like discarding fake email signups from marketing data. The fourth one is enriching. We can even enhance our data by adding trusted external sources, like appending credit scores from a verified bureau.  And the last one is transformation. Here, we finally reshape data formats to be consistent, for example, converting all units to the same currency. So even messy data can become usable, but it takes deliberate effort, a structured process, and attention to quality at every step.  07:26 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 08:10 Nikita: Welcome back! Himanshu, we spoke about how to clean data. Now, once we get high-quality data, how do we analyze it?  Himanshu: In data science, there are four primary types of analysis we typically apply, depending on the business goal we are trying to achieve.  The first one is descriptive analysis. It helps summarize and report what has happened, often using averages, totals, or percentages. For example, retailers use descriptive analysis to understand things like, what was the average customer spend last quarter? How did store foot traffic trend across months?  The second one is diagnostic analysis. Diagnostic analysis digs deeper into why something happened. For example, hospitals use this type of analysis to find out why a certain department has higher patient readmission rates. Was it due to staffing, post-treatment care, or patient demographics?  The third one is predictive analysis. Predictive analysis looks forward, trying to forecast future outcomes based on historical patterns. For example, energy companies predict future electricity demand so they can better manage resources and avoid shortages. And the last one is prescriptive analysis. It does not just predict; it recommends specific actions to take.  For example, logistics and supply chain companies use prescriptive analytics to suggest the most efficient delivery routes or warehouse stocking strategies based on traffic patterns, order volume, and delivery deadlines.   09:42 Lois: So really, we’re using data science to solve everyday problems. Can you walk us through some practical examples of how it’s being applied?  Himanshu: The first one is predictive maintenance. It is used a lot in manufacturing. A factory collects real-time sensor data from machines. Data scientists first clean and organize this massive data stream, explore patterns of past failures, and

    13 min
  8. 12 AUG

    Core AI Concepts – Part 1

Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they dive deeper into the world of artificial intelligence, analyzing the types of machine learning. They also discuss deep learning, including how it works, its applications, and its advantages and challenges. From chatbot assistants to speech-to-text systems and image recognition, they explore how deep learning is powering the tools we use today.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence. If you missed it, I really recommend listening to that episode before you start this one. Today, we’re going to explore some foundational AI concepts, starting with machine learning. After that, we’ll discuss the two main machine learning approaches: supervised learning and unsupervised learning. And we’ll close with deep learning. Lois: Himanshu Raj, our Principal AI/ML Instructor, joins us for today’s episode. Hi Himanshu! Let’s dive right in. What is machine learning?  01:12 Himanshu: Machine learning lets computers learn from examples to make decisions or predictions without being told exactly what to do. It helps computers learn from past data and examples so they can spot patterns and make smart decisions just like humans do, but faster and at scale.  01:31 Nikita: Can you give us a simple analogy so we can understand this better? Himanshu: When you train a dog to sit or fetch, you don't explain the logic behind the command. Instead, you give the dog examples and reinforce correct behavior with rewards, which could be a treat, a pat, or praise. Over time, the dog learns to associate the command with the action and reward. Machine learning learns in a similar way, but with data instead of dog treats. We feed a mathematical system called a model with multiple examples of inputs and the desired outputs, and it learns the pattern. It's trial and error, learning from experience.  Here is another example: recognizing faces. Humans are incredibly good at this, even as babies. We don't need someone to explain every detail of a face. We just see many faces over time and learn the patterns. Machine learning models can be trained the same way. We show them thousands or millions of face images, each labeled, and they start to detect patterns like eyes, nose, mouth, spacing, and different angles. So eventually, they can recognize faces they have seen before or even match new ones that are similar. So machine learning isn't handed explicit rules; it just learns from examples. 
This is the kind of learning behind things like Face ID on your smartphone, security systems that recognize employees, or even Facebook tagging people in your photos. 03:05 Lois: So, what you’re saying is, in machine learning, instead of telling the computer exactly what to do in every situation, you feed the model with data and give it examples of inputs and the correct outputs. Over time, the model figures out patterns and relationships within the data on its own, and it can make a smart guess when it sees something new. I got it! Now let’s move on to how machine learning actually works. Can you take us through the process step by step? Himanshu: Machine learning actually happens in three steps. First, we have the input, which is the training data. Think of this as showing the model a series of examples. It could be images, historical sales data, or customer complaints, whatever we want the machine to learn from. Next comes the pattern finding. This is the brain of the system, where the model starts spotting relationships in the data. It figures out things like, customers who churn or leave usually contact support twice in the same month. It's not given rules; it just learns patterns based on the examples. And finally, we have the output, which is the prediction or decision. This is the result of all this learning. Once trained, the computer or model can say, this customer is likely to churn or leave. It's like having a smart assistant that makes fast, data-driven guesses without needing step-by-step instructions. 04:36 Nikita: What are the main elements in machine learning? Himanshu: In machine learning, we work with two main elements, features and labels. You can think of features as the clues we provide to the model, pieces of information like age, income, or product type. And the label is the solution we want the model to predict, like whether a customer will buy or not.  04:55 Nikita: Ok, I think we need an example here. Let’s go with the one you mentioned earlier about customers who churn. Himanshu: Imagine we have a table with data like customer age, number of visits, and whether they churned or not. Each of these rows is one example. The features are age and visit count. The label is whether the customer churned, that is, yes or no. Over time, the model might learn patterns like, customers under 30 who visit only once are more likely to leave, or frequent visitors above age 45 rarely churn. If features are the clues, then the label is the solution, and the model is the brain of the system. It's what machine learning builds after learning from many examples, just like we do. And again, the better the features are, the better the learning. ML is just looking for patterns in the data we give it. 05:51 Lois: Ok, we’re with you so far. Let’s talk about the different types of machine learning. What is supervised learning? Himanshu: Supervised learning is a type of machine learning where the model learns from the input data and the correct answers. Once trained, the model can use what it learned to predict the correct answer for new, unseen inputs. Think of it like a student learning from a teacher. The teacher shows labeled examples, like an apple, and says, "this is an apple." The student receives feedback on whether their guess was right or wrong. Over time, the student learns to recognize new apples on their own. And that's exactly how supervised learning works: it learns from feedback using labeled data and then makes predictions. 
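As a rough illustration of the features-and-labels idea above (this sketch is not from the episode; the churn numbers are invented and it assumes the scikit-learn library is available), here is how the customer-churn example might look in a few lines of Python:

# Minimal supervised-learning sketch with invented data.
from sklearn.tree import DecisionTreeClassifier

# Features: [age, number_of_visits]; label: 1 = churned, 0 = stayed.
X = [[25, 1], [28, 1], [47, 12], [52, 9], [30, 2], [45, 15]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                     # learn the pattern from labeled examples

print(model.predict([[27, 1]]))     # young, single visit: likely churn -> [1]
print(model.predict([[50, 10]]))    # older, frequent visitor: likely stay -> [0]

The same shape of code applies to the house-price and spam examples that follow; only the features and the label change.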
06:38 Nikita: Ok, so supervised learning means we train the model using labeled data. We already know the right answers, and we're essentially teaching the model to connect the dots between the inputs and the expected outputs. Now, can you give us a few real-world examples of supervised learning? Himanshu: First, house price prediction. In this case, we give the model features like square footage, location, and number of bedrooms, and the label is the actual house price. Over time, it learns how to predict prices for new homes. The second one is email: spam or not. In this case, features might include words in the subject line, the sender, or links in the email. The label is whether the email is spam or not. The model learns patterns to help us filter our inbox, as you would have seen in your Gmail inbox. The third one is cat versus dog classification. Here, the features are the pixels in an image, and the label tells us whether it's a cat or a dog. After seeing many examples, the model learns to tell the difference on its own. Let's now focus on one very common form of supervised learning, that is, regression. Regression is used when we want to predict a numerical value, not a category. In simple terms, it helps answer questions like, how much will it be? Or what will the value be? For example, predicting the price of a house based on its size, location, and number of rooms, or estimating next quarter's revenue based on marketing spend.  08:18 Lois: Are there any other types of supervised learning? Himanshu: While regression is about predicting a number, classification is about predicting a category or type. You can think of it as the model answering, is this a yes or a no, or which group does this belong to?  Classification is used when the goal is to predict a category or a class. Here, the model learns patterns from historical data where both the input variables, known as features, and the correct categories, called labels, are already known.  08:53 Ready to level-up your cloud skills? The 2025 Oracle Fusion Cloud Applications Certifications are here! These industry-recognized credentials validate your expertise in the latest Oracle Fusion Cloud solutions, giving you a competitive edge and helping drive real project success and customer satisfaction. Explore the certification paths, prepare with MyLearn, and position yourself for the future. Visit mylearn.oracle.com to get started today. 09:25 Nikita: Welcome back! So that was supervised machine learning. What about unsupervised machine learning, Himanshu? Himanshu: Unlike supervised learning, here the model is not given any labels or correct answers. It is just handed the raw input data and left to make sense of it on its own.  The model explores the data and discovers hidden patterns, groupings, or structures on its own, without being explicitly told what to look for. It's more like a student learning from observations and making their own inferences.
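To contrast with the supervised sketch earlier, here is a minimal unsupervised example (again my own illustration, not from the episode, with invented data and assuming scikit-learn is installed): k-means clustering groups unlabeled customers on its own, with no correct answers supplied.

# Minimal unsupervised-learning sketch with invented data.
from sklearn.cluster import KMeans

# Unlabeled data: [age, number_of_visits] only; there is no churn label this time.
X = [[25, 1], [28, 2], [30, 1], [47, 12], [52, 9], [45, 15]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)   # the model discovers the groupings itself

print(labels)                    # e.g., [0 0 0 1 1 1]: young infrequent visitors vs. older frequent visitors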

    20 min

