ADAPT Insider

ADAPT

This is ADAPT Insider. Proudly A/NZ first. For more than 15 years, ADAPT has empowered Australia and New Zealand’s executive community with trusted data, insights, and connections so leaders can make better decisions with confidence. Because this region is different. Our markets are unique. And the challenges facing enterprise leaders, from legacy technology to transformation, are only getting bigger. Each year, through in-depth research, benchmarking and executive-only events, ADAPT engages with over 2,000 senior leaders across the region’s most influential enterprise and government organisations. ADAPT Insider brings you inside those conversations. Real perspectives from technology and business leaders. Independent research grounded in local data. And practical intelligence you can actually use. This is not theory. It is insight for leaders driving modernisation. Welcome to ADAPT Insider. Built for A/NZ leaders. Backed by data. Designed to help you move with confidence.

Episodes

  1. Assessment is becoming the real AI challenge for universities, says the University of Sydney’s Interim CIO

    11 HR AGO


    AI pressure is hitting universities differently from most organisations. It is being driven from the ground up by students, academics, and researchers already testing where AI helps and where it starts to distort learning. Kerry Holling, Interim CIO at the University of Sydney, explains how that pressure is changing governance, teaching, and trust across the institution.

    Key takeaways:

    - Student behaviour is forcing universities to move faster on AI governance, with guardrails that protect privacy, sovereignty, and research integrity without slowing useful experimentation.
    - Assessment design is becoming a bigger challenge than tool access, as universities work out how to test real understanding in an AI-enabled learning environment.
    - Trust in university AI depends on balance, with enough freedom to support research and learning, and enough control to protect rigour, accountability, and public confidence.

    Universities need guardrails that people will actually use

    AI governance in universities has to work in the real world. If the controls are too rigid, staff and students will route around them. If they are too loose, privacy, data sovereignty, and research integrity are exposed. That is why the University of Sydney has focused on practical guardrails developed jointly across IT and Legal, with self-assessment tools that help staff judge use cases without turning governance into a bottleneck. Kerry’s point is that balance matters more in a university environment because academic work depends on openness and experimentation, while the institution still has to protect sensitive data, intellectual property, and research quality. The goal is to create enough structure to support safe use without shutting down the value AI can bring to research, teaching, and operations.

    The bigger teaching challenge is no longer access to AI, it is assessment

    The hard question for universities is no longer whether students will use AI. The real issue is whether assessment still measures understanding, judgement, and learning in an environment where AI can generate convincing outputs quickly. That is where Kerry sees the pressure building. He argues that AI can improve learning when it helps students deepen their understanding, but weak assessment design will invite shortcuts instead. The stronger institutional response is to rethink how knowledge is tested so students still have to demonstrate real comprehension. He also points to examples where AI is improving access to teaching rather than undermining it. One University of Sydney academic built Cogniti to replicate parts of one-to-one support at scale, and Kerry says it is now used by more than 5,000 academics to help develop curriculum material and provide more personalised tuition to students. For him, that is what useful AI in education looks like: expanding learning support in places where human access is naturally limited.

    Trust grows when AI is used to augment people, not displace judgement

    Universities will get more value from AI when they treat it as a tool for augmentation, not as a substitute for human thinking, academic rigour, or institutional accountability. Kerry is optimistic about AI’s potential, but he is careful about how far that optimism should go. He supports AI for personal productivity and sees genuine value in tools that accelerate research, improve learning outcomes, and reduce friction in university work. At the same time, he is wary of over-dependence, concerned about the concentration of power in large technology companies, and clear that institutions should be selective about how they deploy AI. He describes it as something that should be used with respect, not submission, and that framing matters. The trust challenge in higher education is not only about policy. It is also about making sure AI strengthens human capability, protects academic judgement, and earns confidence as adoption becomes more embedded.

    14 min
  2. How the Australian government uses AI to solve complex public problems safely and at scale

    23 MAR


    What does safe scale look like when the cost of getting AI wrong is measured in public trust? In this ADAPT Insider podcast episode, Daniela Polit, Public Sector Transformation Executive, outlines a clear test for government AI. It should help solve complex public problems, reduce friction for citizens, and improve services at scale, while operating within guardrails that protect sovereignty, accountability, and trust.

    Key takeaways:

    - AI should only be used when it clearly improves a public outcome, whether that means faster service, less friction, better inclusivity, or more efficient processing.
    - Trust depends on keeping sovereignty, transparency, and human accountability intact, with AI used inside closed environments and final decisions always staying with people.
    - Safer AI adoption comes from matching governance, training, and oversight to the level of public risk, rather than applying the same approach everywhere.

    Public value has to come before the technology

    Strong AI strategies begin with the problem being solved. In government, that means asking whether a tool can genuinely improve a service, shorten wait times, reduce bureaucracy, or make support easier to access for citizens. That is the lens Daniela applies throughout the conversation. She describes AI as a way to solve complex public sector problems, especially where service delivery involves scale, complexity, and large volumes of information. The value, in her view, comes from helping people deal with government faster and with less friction, whether that means reducing unnecessary touchpoints, improving transparency, or tailoring services more effectively across very different citizen needs. If AI can clearly improve the outcome, it has a case. If the likely value is marginal and the risk is higher, it should not be forced in.

    Trust holds when sovereignty and accountability are protected

    Public sector AI needs trust built into where models run, how data is handled, and who remains responsible for the outcome. Daniela makes that standard explicit. She says government models are hosted in closed internal environments, with the same rules and authorisations that already apply to public data carried through into AI use. She is equally clear that accountability stays with a person. A tool can support a decision, accelerate a process, or structure information more effectively, but it cannot replace the accountable decision maker. That combination of sovereignty, transparency, and human oversight is what allows AI to be used in sensitive environments while preserving public confidence.

    Safer scaling depends on risk-based governance and training

    AI becomes easier to scale when governance gives teams a clear way to assess value, manage risk, and move suitable use cases forward with confidence. That is how Daniela describes the public sector approach. She points to frameworks and assurance checks that run from ideation through implementation, testing, and evaluation, with policies evolving as use cases become more complex. She also makes clear that training should reflect the level of public impact. More structured education and tighter oversight are used for public-facing or higher-risk applications, while lower-risk internal tools can be handled more flexibly. Even in a large and federated system where collaboration across agencies is still often informal, that discipline creates a stronger filter around value, accountability, and safe deployment.

    16 min
  3. What it takes to scale agentic AI in a regulated environment, according to CareSuper’s CTO

    9 MAR


    Agentic AI is creating new opportunities for efficiency and service improvement, but regulated organisations do not have the luxury of scaling it loosely. Governance, trust, and accountability have to mature alongside the technology. CareSuper CTO Simon Reiter talks about how the fund is balancing experimentation with regulatory obligations, internal adoption, and the controls needed to move AI safely into day-to-day operations.

    Key takeaways:

    - Governance has to come before scale, with clear policies, standards, and risk controls in place before AI moves into production.
    - AI adoption depends as much on trust and change management as it does on the technology itself.
    - Production-ready AI needs strong foundations, especially clean data, connected systems, and clear identity controls.

    Governance has to come before scale

    Regulated organisations cannot treat AI as a tool first and a governance problem later. The controls need to be built early so promising use cases can move into production safely. Simon says CareSuper is running two streams in parallel: one focused on governance, ethics, standards, and regulatory alignment, and another focused on practical use cases from across the business. That creates a clearer path from pilot to production, while filtering out ideas that are really process issues rather than AI opportunities.

    AI adoption moves faster when people trust the change

    Adoption does not come from rollout alone. People need to understand where AI fits, what it improves, and why it is being introduced. Simon says CareSuper invested heavily in change management before scaling usage across the organisation. Executives, general managers, and technology teams went through training supported by internal sessions and practical examples, helping staff build confidence and see AI as a way to remove low-value work while keeping humans in the loop.

    Data, integration, and identity will decide what reaches production

    The hard part is not running a pilot. It is building the foundations that make AI safe and reliable once it starts acting across systems and workflows. Simon points to three essentials: data quality, integration capability, and identity management. Poor data can still produce convincing outputs, weak integration limits operational value, and unclear identity makes it harder to trace, audit, and govern agent activity. That is why CareSuper has invested in stronger identity controls and a central asset register to link agents to accountable owners and ongoing review.

    15 min
  4. ADAPT Advisors from Alyve, HivePix, and The Consulting CIO on why AI strategy breaks when companies start with the tool

    9 MAR


    AI is moving into the core of how organisations operate, and that is changing the leadership task. Claudine Ogilvie, CEO at HivePix and former CIO at Jetstar, Mark Cameron, CEO and Director at Alyve, and Brett Raven, Fractional CTO and CIO at The Consulting CIO, examine how leaders are reworking governance, decision making, and accountability as AI becomes embedded across the business.

    Key takeaways:

    - Boards are moving from broad AI guardrails to clearer risk appetite. Leaders are being forced to define where experimentation is encouraged, where tighter controls apply, and how much downside the organisation is willing to tolerate.
    - The strongest AI programs start with a business problem. Fear of missing out is still driving poor sequencing, with many organisations choosing tools first and searching for value later.
    - Agentic AI is turning governance into an operating model issue. As AI takes action across workflows, leaders need clearer ownership, stronger controls, and explicit accountability for outcomes.

    Boards need clearer risk appetite as AI moves deeper into the business

    AI is becoming a board-level issue because it is starting to shape business value, operating models, and enterprise risk. Broad guardrails were a useful starting point, but they are no longer enough. Claudine argues that boards are now moving towards clearer policy and more explicit risk appetite. That matters because leaders need to define where experimentation is encouraged, where tighter control is needed, and how much downside the organisation is prepared to tolerate. As AI becomes more embedded in the business, governance has to become more specific and more actionable.

    The fastest way to waste AI investment is to start with the tool

    Many organisations are still approaching AI backwards, choosing the platform first and searching for the use case later. Brett warns that this approach is usually driven by fear of missing out rather than clear strategy. His argument is simple: start with the business problem, define the value to be created, then assess whether AI is the right fit. That discipline is what separates scattered experimentation from meaningful adoption.

    Agentic AI is turning governance into an operating model issue

    Once AI starts acting across workflows, governance becomes much more than a policy discussion. It becomes a question of ownership, access, supervision, and performance. Mark says leaders need to focus less on chasing every technical update and more on how AI is changing the structure of work itself. In practice, that means defining the role an agent is performing, the boundaries around its actions, and who remains accountable for its outcomes. As agentic AI spreads, organisations will need stronger controls and much clearer operating discipline.

    The organisations that move fastest will be the ones with the strongest discipline

    As AI moves from experimentation into execution, the organisations that pull ahead will be the ones that treat governance, ownership, and operating discipline as core parts of adoption, rather than problems to solve later.

    48 min
  5. AI should strengthen care, not replace human connection, says Uniting’s CDIO

    9 MAR


    What happens when AI removes admin from frontline care without taking people out of the process? Andrew Dome, Chief Digital Information Officer at Uniting, explains how the organisation is using AI to reduce documentation friction, keep people in the loop, and build towards safer care outcomes.

    Key takeaways:

    - The strongest frontline AI use cases remove admin where care happens, giving staff more time back in the day without taking people out of the process.
    - Trust grows faster when AI is introduced with clear guardrails, visible consent, and human oversight built into the workflow.
    - A practical AI use case becomes far more valuable when it proves repeatable enough to scale internally and relevant enough for others to want it too.

    Frontline AI works when it removes admin at the point of care

    The most valuable AI use cases in aged care sit inside frontline workflows, where they can remove admin at the point of service and return time directly to care. Andrew says their Azure/ChatGPT 5.0-powered AI assistant “Buddy” was designed to reduce documentation friction for frontline teams, especially in home and community care. Staff can use it to capture notes through voice transcription straight after visits instead of losing time later to manual write-ups. For Uniting, that turns AI into a practical care workflow tool rather than another layer of system complexity. He also points to a stronger sign that Buddy has moved beyond experimentation. Other aged care providers have shown interest in adopting it as a software-as-a-service platform, suggesting the tool is solving a repeatable frontline problem rather than a one-off internal need.

    Trust depends on visible guardrails and human oversight

    AI in care settings cannot sit in the background as an invisible layer. People need to understand what it is doing, where it is being used, and what remains under human control. In residential care, that means staff explaining when AI transcription is being used, asking whether clients are comfortable, and showing the output so it can be reviewed. That makes the lesson broader than responsible AI as a slogan. Andrew argues that in care environments, trust grows faster when consent, visibility, and quality assurance are built into the workflow itself.

    The next gains will come from safer and more responsive care

    With AI already reducing admin safely, the next opportunity is to extend its value into earlier action and more consistent care. Andrew links that next step to Uniting’s work on AI-supported service interactions and fall prevention, where the aim is to help staff respond sooner, reduce risk, and strengthen continuity of care.

    13 min
