PING

APNIC

PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we will chat with people who have built and are improving the health of the Internet. The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.

  1. 1 DAY AGO

    BGP in review for 2025

    In this episode of PING, APNIC Chief Scientist Geoff Huston returns with his annual review of BGP, reflecting on developments across 2025. Geoff has been publishing this year-in-review analysis of BGP dynamics for more than a decade, and this time he has uncovered some genuinely surprising shifts. His 2025 analysis has been published in two parts on the APNIC Blog.

    Border Gateway Protocol (BGP) is the mechanism by which network operators announce their Internet address space to the rest of the world and, in turn, learn about the addresses announced by others. Operators participating in the global default-free zone receive all publicly announced routes, each expressed as an IP prefix and associated with its originating Autonomous System Number (ASN). Every BGP speaker has a unique ASN, and all routing information is exchanged and interpreted through this fundamental identifier. In effect, the ASN is the basic unit of interdomain routing. BGP also carries path information that describes how routing announcements traverse the network. This data informs routing policy decisions: which paths to prefer, and through which commercial or technical relationships. While the protocol itself is well understood, the system as a whole is anything but simple. When more than 100,000 ASes are continuously exchanging routing information, complexity is unavoidable.

    Speaking BGP is about telling things and learning things, but it’s also about deciding what to do with what has been learned. This is the work a router performs: holding all of that information and making routing decisions over it, so the ‘size’ of the information shared and learned has a direct impact on the ‘cost’ of operating as a BGP speaker (cost here ultimately means memory and CPU). For most of the Internet’s history, BGP growth has been relentless, forcing operators to continually ask whether their current routing infrastructure can accommodate future growth. A small sketch of this routing bookkeeping follows this entry.

    All technology adoption follows a life cycle, often referred to as the ‘technology adoption curve’. New technologies start out expensive and scarce, become cheaper and widely adopted, and eventually reach a point of saturation where growth slows and replacement becomes the dominant driver. For much of its existence, the Internet has remained firmly in the rapid growth phase of this curve, with sustained increases in users, networks, and routing information.

    Geoff has detected changes in the pace of growth for both IPv4 and IPv6, which suggest that the underlying economics of investment in the Internet, and growth in its customer base, have reached a saturation point. We are entering a time where BGP growth may not have the dynamics we have been used to, and questions about capital investment in BGP routing, and in the underlying Internet addressing, are no longer the same.

    58 min
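
    To make that bookkeeping a little more concrete, here is a minimal sketch in Python (not APNIC’s measurement code) of what a BGP speaker ends up holding: each route is a prefix carried with an AS path, and the origin ASN is the last element of that path. The prefixes, paths, and ASNs below are invented documentation values.

      # Minimal sketch: counting prefixes and origin ASNs in a small routing table.
      # Each entry is (announced prefix, AS path as seen by this BGP speaker); the
      # sample routes are made up for illustration.
      from collections import Counter

      routes = [
          ("192.0.2.0/24",    [64500, 64496]),          # origin ASN is the last hop: 64496
          ("198.51.100.0/24", [64500, 64497, 64499]),
          ("2001:db8::/32",   [64500, 64496]),
      ]

      prefixes = {prefix for prefix, _ in routes}
      origins = Counter(path[-1] for _, path in routes)

      print(f"{len(prefixes)} prefixes announced by {len(origins)} origin ASNs")
      for asn, count in origins.items():
          print(f"AS{asn} originates {count} prefix(es)")
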
  2. 21 JAN

    NITK Students at IETF: Fresh Minds for standards development

    Welcome back to PING for 2026 and season 6. This time on PING, we have a pair of interviews with students from the National Institute of Technology Karnataka, Surathkal (NITK), recorded last year at IETF 122. This is the second time we’ve heard from NITK students: we previously heard from Vanessa Fernandes and Kavya Bhat when they attended IETF 119 in 2024.

    NITK is a large, technically focused university located on India’s south-western coast in the state of Karnataka. The state is home to major technology hubs, such as Bengaluru and Mangaluru, alongside institutions like NITK, which play a key role in developing technical talent. Against this backdrop, it is unsurprising that NITK students show a strong interest in network technologies and Internet protocol development. Dr Mohit Tahiliani, Associate Professor at NITK, has led a multi-year program involving undergraduate, postgraduate, and postdoctoral researchers to engage with emerging Internet standards. Through this program, participants explore new ideas, contribute code, and take part in IETF hackathons and Working Group activities. This work has been supported in part by the APNIC Foundation.

    Last time, with Vanessa and Kavya, we explored NITK’s multi-year campus IPv6 deployment, which has been underway for some time. That work has included direct engagement with the IETF, with Dr Mohit Tahiliani’s students attending alongside Nalini Elkins, who is involved both in the IPv6 deployment at NITK and in IPv6 standards work within the IETF. Since then, both students have gone on to work in networking roles or to pursue further study, reflecting the longer-term impact of sustained involvement in operational and standards-based Internet engineering.

    This time, we have two different projects and nine students to hear from. The first group is Rati Preethi Subramanian, Shriya Anil, Mahati Kalale, Anuhya Murki and Supradha Bhat, who explored fair queuing disciplines: FQ_Codel, a derivative FQ_Codel++, and a newer proposal, FQ_Pie. They worked with the NS3 network simulator and CCPerf, exploring how these queueing disciplines compare, and discussed their project with me at IETF 122. The second group is Vartika T Rao, Hayyan Arshad, Siddharth Bhat and Bharadwaja Meherrushi Chittapragada, who looked at the YANG data model in the network management space, and at more efficient ways to manage the data coming out of networking systems using YANG. They wrote a producer-consumer model in Python and explored time-series databases, using interface packet counts, encoded in CBOR, as an example YANG dataset. A short sketch of that encoding idea follows this entry.

    Finally, I spoke with Dr Mohit Tahiliani, who has been leading this project. He is strongly committed to bringing new and younger voices into IETF work, recognizing the value of exposing students to real-world protocol development early in their careers. This experience benefits participants by grounding their learning in practical standards work, while also helping the IETF engage with new contributors who may return to protocol development in the future. This sustained engagement has already had tangible outcomes: the students involved have gone on to roles in the ICT sector or to further academic study, demonstrating the long-term value of this collaborative model.

    31 min
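
    As a rough illustration of the second project’s idea (a sketch of mine, not the students’ code), the snippet below encodes a small set of interface packet counters, the kind of data a YANG interfaces model describes, in both JSON and CBOR to compare sizes. It assumes the third-party cbor2 Python package, and the interface name and counter values are invented.

      # Sketch: comparing JSON and CBOR encodings of interface counters, to show why
      # a compact binary encoding appeals for network telemetry pipelines.
      import json
      import cbor2  # third-party package: pip install cbor2

      sample = {
          "interface": "eth0",
          "in-unicast-pkts": 1284331,
          "out-unicast-pkts": 1102874,
          "timestamp": "2025-03-18T04:00:00Z",
      }

      as_json = json.dumps(sample).encode("utf-8")
      as_cbor = cbor2.dumps(sample)

      print(f"JSON: {len(as_json)} bytes, CBOR: {len(as_cbor)} bytes")
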
  3. 10/12/2025

    Going Dark: measurement when the Internet hides the detail

    In the final podcast for 2025, APNIC Chief Scientist Geoff Huston discusses the problem of independent measurement in an Internet which is increasingly “going dark”.

    Communications has always included a risk of snooping, and a matching component of work to enhance privacy: from the simplest ciphers used in ancient times, techniques of hiding and discovering messages, and attempts to prevent and detect intrusion into the mail, to the adoption of telegraph codes, the cutting of telegraph wires in wartime (to force messages onto radio, where they could be listened to), and the development of modern encryption algorithms, typically using the public-private keypair model. There has always been a story of “attack” and “response” in how we communicate privately.

    Aside from matters of state security, banking and finance at large depend on a degree of privacy, and now require it under legislation to enable the use of credit card information online. Many other contexts carry an assumption of privacy and use technology to try to preserve it. Fundamentally, individuals in their use of the Internet are entitled to expect a level of privacy where the state permits it.

    The publication of RFC 7258, “Pervasive Monitoring Is an Attack”, in 2014 formalised the belief that the intrusion of third parties into a communication between two ends demanded a technology response to exclude them, where possible. Protocol designers and Internet engineers took up the challenge, and this position led over time to a marked increase in the adoption of privacy-enhancing protocol features. For example, the web moved from HTTP: denoted URLs to HTTPS:, where the content is protected by the Transport Layer Security (TLS) encryption protocol, which now overwhelmingly predominates across the web at large.

    However, significant aspects of Internet communications still “leak” information to third parties. Between an individual and a web service lie their provider, unknown numbers of intermediate providers, and typically a content distribution system hosting a local copy of the website, all of whom have opportunities to see and understand what is being done, and by whom. In particular, the DNS typically exposes the name and address of the site being connected to, across all kinds of protocols (not just the web), and exposes it to unknown intermediary systems as the DNS lookup is processed.

    In response to this, services are emerging which break the DNS lookup into dissociated parts: what is being looked for, and who is looking for it, using intermediary services which may know one but not both. Questions are seen to be asked, but who is asking is now hidden; and if you know who is asking, you don’t know what they are asking for. A conceptual sketch of this split follows this entry. Combined with newer network protocols like QUIC, which imposes a strong end-to-end encryption model that even hides inter-packet size and timing information (another form of leak which can be used to reconstruct what kind of traffic is flowing), it has become increasingly hard for an independent researcher to see inside the network: it’s going dark.

    Geoff explores the nature of privacy in the Internet at large, and how APNIC Labs gets around this problem with its measurement system. PING will return in January 2026 with another season of episodes. Until then, enjoy this final recording of 2025, and see you online in the new year.

    54 min
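
    The query dissociation Geoff describes can be pictured with a conceptual sketch, in the spirit of Oblivious DNS over HTTPS (RFC 9230), though this is not a real implementation: the proxy learns who asked but not what, and the resolver learns what was asked but not by whom. The addresses and names are placeholders, and the “sealed” query is left as plain bytes where a real deployment would encrypt it so the proxy cannot read it.

      # Conceptual sketch of the who/what split, not a working privacy protocol.
      class Resolver:
          """Sees what is being asked, but only the proxy as the apparent source."""
          def resolve(self, sealed_query: bytes) -> bytes:
              name = sealed_query.decode()         # stand-in for decrypting the query
              print(f"resolver: lookup for {name}, asker unknown")
              return b"192.0.2.53"                 # stand-in answer

      class Proxy:
          """Sees who is asking, but only an opaque blob of what they ask."""
          def forward(self, client_addr: str, sealed_query: bytes, target: Resolver) -> bytes:
              print(f"proxy: query from {client_addr}, contents hidden ({len(sealed_query)} bytes)")
              return target.resolve(sealed_query)  # the resolver never learns client_addr

      answer = Proxy().forward("203.0.113.7", b"www.example.com", Resolver())
      print("client receives:", answer)
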
  4. 12/11/2025

The Realpolitik of undersea cables

    In this episode of PING, APNIC Chief Scientist Geoff Huston explores the complex landscape of undersea cables. They have always had a component of strategic interest: communications, and snooping on communications, have been a constant since writing was invented, and the act of connecting two independent nation states by a telegraph wire invokes questions of ownership and jurisdiction right from the start.

    After the initial physics of running a long-distance wire to make an electric circuit was worked out, telegraph services became a vital part of a state’s economic and information-gathering processes. This is why, at the beginning of World War 1 and again in World War 2, the submarine cables linking Europe out into the world were cut by the British Navy: forcing the communications flows onto radio meant it was possible to listen in and, with luck (and some smart people), decode the signals.

    Modern-day fibre optic communications are no different in this regard. Many incidents of cable cutting have simple explanations: not all the paths subsea cables run through are especially deep, and in shallow waters near landfall, with lots of fish, trawlers cause a lot of damage. But there is now good reason to believe state actors are also disrupting fibre communications by breaking links, and there is a strong trend to direct which sources of equipment (from the physical fibre up to the active routing systems) are used for a landfall into any given economy. This in turn is influencing the flow of capital, and the paths taken by subsea fibre systems, as a result of these competing pressures.

    52 min
  5. 15/10/2025

    Geolocation and Starlink

    In this episode of PING, APNIC Chief Scientist Geoff Huston discusses a problem which cropped up recently with the location tagging of IP addresses seen in the APNIC Labs measurement system.

    For compiling national, economic, and regional statistics, and to understand the distribution of the measurement experiments into each market segment, Labs relies on the freely available geolocation databases from maxmind.com and IPinfo.io, which in turn are constructed from a variety of sources, such as BGP data, the resource distribution reports compiled by the RIRs, whois and RDAP declarations, and the self-asserted RFC 8805 format geolocation statements that ISPs publish. At best this mechanism is an approximation, and with the increasing mobility of IP addresses worldwide it has become harder to be confident of the specific location of an IP address seen as the source of an Internet data flow, not least because of the increasing use of Virtual Private Networks (VPNs) and address cloaking methods such as Apple Private Relay or Cloudflare WARP (although, as Geoff notes, these systems do the best they can to account for the geographic distribution of their users in a coarse-grained, “privacy preserving” manner). A short sketch of parsing an RFC 8805 geofeed follows this entry.

    Geoff was contacted by Ben Roberts of Digital Economy Kenya, a new board member of AFRINIC and a long-time industry analyst and technical advisor. He’d noticed anomalies in the reporting of Internet statistics from Yemen, which simply could not be squared with the realities of that segment of the Internet economy. This in turn has led Geoff to examine in detail the impact of Starlink on the distribution of Internet traffic, and to make adjustments to his geolocation practices for the measurement, which will become visible in the Labs statistics as the smoothing functions work through the changes.

    Low Earth Orbit (LEO) space delivery of the Internet has had rapid and sometimes surprising effects on the visibility of the Internet worldwide. The orbital mechanics mean that virtually the entire surface of the globe is now Internet enabled, albeit at a price beyond the reach of many in the local economy. This is altering the fundamentals of how we “see” Internet use, and helps explain some of the problems which have been building up in the Labs data model.

    51 min
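
    As a small illustration of the self-published geolocation data mentioned above, the sketch below parses feed lines in the RFC 8805 CSV layout (prefix, country, region, city, postal code). The feed contents, including the example Kenyan and Yemeni entries, are invented for illustration; real feeds are published by operators and fetched from a URL.

      # Sketch: parsing RFC 8805 style geofeed lines (comments start with '#';
      # fields are prefix, country, region, city, postal code).
      import csv
      import ipaddress

      feed_lines = [
          "# example geofeed (invented data)",
          "192.0.2.0/24,KE,KE-110,Nairobi,",
          "2001:db8:1000::/36,YE,,,",
      ]

      entries = []
      for row in csv.reader(feed_lines):
          if not row or row[0].lstrip().startswith("#"):
              continue                              # skip blank lines and comments
          fields = [field.strip() for field in row] + ["", "", "", ""]
          prefix, country = fields[0], fields[1]
          entries.append((ipaddress.ip_network(prefix), country or None))

      for prefix, country in entries:
          print(f"{prefix} -> country {country or 'unspecified'}")
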
  6. 01/10/2025

    Measuring RSSAC047 Conformance

    RSSAC047, a document from the Root Server System Advisory Committee, proposed a set of metrics to measure the DNS root servers, and the DNS root server system as a whole. The document was approved in 2020, and ICANN worked on an implementation of the metrics as code, deployed across 20 points of measurement distributed worldwide. ISC and Verisign, two of the root server operators, proposed a review of this measurement and retained SIDN Labs (part of the Dutch body operating .NL as a country code top-level domain, or ccTLD) to look into how well the measurement was performing.

    In this episode of PING, Moritz Müller from SIDN Labs and Duane Wessels from Verisign discuss this "measurement of the measurement" exercise, what they found out, and what it may mean for the future of metrics at the DNS root. It's an interesting "meta conversation" about measuring things which are themselves measurements. We see this all the time in the real world: for example, diagnostic imaging machines designed to measure bone density (for osteoporosis checks) require calibration, and when you want to compare against a baseline over time, that calibration and the specific machine become questions the clinician may want to check when assessing the results. Change the machine and you get a different sensitivity, so how do you line up the data?

    Moritz's investigations show that, in some respects, the ICANN implementation of RSSAC047 was incomplete and didn't tell an entirely accurate story about the state of the DNS Root Server System. There are also questions of scale and location which mean a re-implementation, or a future improvement, is worth discussing. A small sketch of aggregating probe results into an availability figure follows this entry.

    31 min
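
    To give a feel for the shape of vantage-point metrics like those discussed in the episode, here is a small sketch that aggregates probe outcomes into an availability figure. The vantage points, probe results, and the 96% threshold shown are illustrative assumptions for this sketch, not the parameters RSSAC047 actually specifies.

      # Sketch: turning per-vantage-point probe outcomes into an availability figure.
      from statistics import mean

      # vantage point -> probe outcomes (True = the root server answered in time)
      probes = {
          "vp-tokyo":     [True, True, True, False, True],
          "vp-frankfurt": [True, True, True, True, True],
          "vp-sydney":    [True, False, True, True, True],
      }

      per_vp = {vp: mean(results) for vp, results in probes.items()}
      overall = mean(per_vp.values())

      for vp, availability in per_vp.items():
          print(f"{vp}: {availability:.0%} of probes answered")
      print(f"overall availability: {overall:.0%} (illustrative threshold: 96%)")
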
