Technopolitik, by Pranay Kotasthane
47 episodes

Exploring the intersection of technology and international relations from an Indian national interest perspective.

hightechir.substack.com


    #49 US-India's High-Tech Talks, and Concerns surrounding TikTok.


    Last week saw a flurry of technopolitical developments as the US and India announced a slew of technology and defense deals. In case you missed it, we had a special post dissecting the preliminary details of India’s accession to the Artemis Accords. Check it out here! Also tune in to this podcast episode of All Things Policy, where Pranay Kotasthane, Aditya Ramanathan, Bharath Reddy, and Saurabh Todi from the High-Tech Geopolitics team discuss the announcements in the India-US joint statement in the field of Semiconductors, Advanced Telecommunications, and Space.
    Matsyanyaaya 1: Concerns Surrounding TikTok and the Future of ‘Project Texas’
    — Anushka Saxena
    On June 16, Shou Zi Chew, CEO of the controversial Chinese-owned media platform TikTok, sent a letter to US Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.), responding to their questions about how the company stores American users' data. In earlier testimony to a US House committee, Shou had stated that "American data has always been stored in Virginia and Singapore." A Forbes investigation from late May, however, revealed that this may not be entirely true. The investigation prompted the Senators to seek answers from TikTok, and Shou's letter has now confirmed their suspicions.
    What did Forbes' investigation say?
    On May 30, Forbes published a report arguing that "over the past several years, thousands of TikTok creators and businesses around the world have given the company sensitive financial information—including their social security numbers and tax IDs—so that they can be paid by the platform. But unbeknownst to many of them, TikTok has stored that personal financial information on servers in China that are accessible by employees there, Forbes has learned."
    Further, their report argued: "TikTok uses various internal tools and databases from its Beijing-based parent ByteDance to manage payments to creators who earn money through the app, including many of its biggest stars in the United States and Europe. The same tools are used to pay outside vendors and small businesses working with TikTok. But a trove of records obtained by Forbes from multiple sources across different parts of the company reveals that highly sensitive financial and personal information about those prized users and third parties has been stored in China. The discovery also raises questions about whether employees who are not authorized to access that data have been able to. It draws on internal communications, audio recordings, videos, screenshots, documents marked "Privileged and Confidential," and several people familiar with the matter."
    …And what has Shou said in his letter to Blackburn and Blumenthal?
    The letter's main purpose is to confirm that over the past year, TikTok has worked closely with Oracle to implement measures to enhance the protection of the application, its systems, and the security of its US users' data.
    The letter also announced a significant milestone achieved in January 2023: the default storage location for US user data has been changed, and all US user traffic is now directed to Oracle Cloud Infrastructure. While TikTok's data centres in the US and Singapore are still used for backups, the company is working to remove US users' private data from its own data centres and to fully transition to Oracle cloud servers in the United States. As of March 2023, it has also started deleting previously stored data from foreign servers.
    But the controversy arises from the second main point of the letter, which reads: "TikTok has been clear that there are certain, limited exceptions to the definition of protected data. These exceptions are in place to help ensure interoperability of TikTok as a global platform and were determined as part of TikT

    • 29 min
    #47 Of Measured Cyberspace Regulations and Lofty Space Ambitions


    Matsyanyaaya: Insights from recent OEWG discussions on Information and Communications Technologies
    — Anushka Saxena
    The militarisation of cyberspace is a reality. To enable states to discuss and adopt common rules for the global governance of cyberspace, the United Nations General Assembly adopted resolution 75/240 on 31 December 2020, establishing an Open-ended Working Group (OEWG) on the security of and in the use of Information and Communications Technologies. The Group's mandate extends from 2021 to 2025.
    The Group recently concluded its informal, inter-sessional meetings on 26 May, and deliberations put forth by various states give us some insights into the kind of talking points we could look out for during the fifth Substantive Session of the Group, scheduled for July 2023.
    To summarise, various stakeholders, ranging from governments and representatives of UN bodies to scholars from think tanks and technology corporations, submitted ideas about what the 2023 Annual Progress Report (APR) should entail. All of their ideas either build on or expand what has already been discussed in the previous substantive and informal sessions in 2023 or the 2022 APR. Some interesting ideas are as follows:
    * Iran submitted a Working Paper on establishing a provisional directory of 'Points of Contact' (PoCs) on ICT and cybersecurity.
    ●     The first proposal to develop such a global directory was tabled in the UN Group of Governmental Experts (GGE) report of 2013 (A/68/98). Now, every GGE and OEWG discussion notes progress on the directory.
    ●     The aim of the directory is for states to appoint field experts in technical or diplomatic positions (or both) to a global PoC network debating everything from responsible state behaviour in cyberspace and the applicability of international law to defining threats to ICT.
    ●     As we know, the current Indian government has quite a knack for portals, and to formalise the creation of a PoCs global directory, India, too, has proposed the creation of a Global Cyber Security Cooperation Portal. The proposal, submitted by India's Permanent Representative in New York in July 2022, states that such a Portal shall be voluntarily updated by states and maintained by the UN Office for Disarmament Affairs.
    * The UNOCT/UNCCT and the UN Counter-Terrorism Committee Executive Directorate presented proposals for 'capacity building'. The former's proposal largely showcased the successes of its Global Counter-Terrorism Programme on Cybersecurity and New Technologies. But the latter, presented by the UNCTED, emphatically highlighted the challenge of malicious online activity by rogue non-state actors and how existing counter-terrorism infrastructure can be leveraged to deal with it.
    ●     The important recommendation is to develop comprehensive training programmes for law enforcement personnel and criminal justice practitioners working with digital evidence. The mention of the latter may be an important signal of more private sector participation in navigating the legalities of what constitutes 'terrorism' in cyberspace.
    * Submissions from the private sector mainly highlighted which governmental proposals are the most crucial to focus on in the next substantive session and how they can be expanded or narrowed down:
    ●     Stimson Center's submission argued that the two major emerging technologies states should agree on as common threats to ICT security are ransomware and Artificial Intelligence.
    ●     It should be noted that both El Salvador and Czechia made statements during the last substantive session in March on the need to develop standards on 'responsible state behaviour' in new and emerging tech like AI and Quantum. But these efforts would be futile until states can first agree on what harmful use of AI/Quantum is, given the dual-use nature of such technologies, and only then move on to standard-setting.

    • 21 min
    #46 Numerology of conflict and cooperation in technology


    Biopolitik: The Power of Four: Biomanufacturing and the Quad
    — Saurabh Todi
    A biological revolution is underway in global manufacturing. Products made with genetic engineering and biomanufacturing techniques are replacing many chemical, industrial and farm-based products. According to a 2020 McKinsey report, up to 60 per cent of the physical inputs to the global economy could, in principle, be produced biologically. Similar modern biotechnology efforts are underway in milk, meat, pharmaceuticals, oils and numerous other industries. Individually, these industries are worth billions or trillions of dollars. Combined, they make biotechnology one of the most economically lucrative emerging technologies.
    However, beyond the obvious economic value, there is significant strategic and social value in modern biotechnologies. The products produced by modern biotechnology are or will be essential for producing food, energy, and health management. Those that control the IP and supply chains will potentially control key determinants of society’s technological progress. There are also numerous potential military applications for biotechnology that range from food security to new, lightweight polymers to understanding the potential of highly effective biological weapons (which are banned under international law).
    Given the immense economic and strategic importance of these technologies, it is vital that countries do not place themselves in a vulnerable position. The Quad has sought to address this potential vulnerability by establishing a Critical and Emerging Technology Working Group that will monitor trends in critical and emerging technologies, such as synthetic biology, genome sequencing, and biomanufacturing, and also identify opportunities for cooperation within Quad.
    China plans to establish its dominance in biomanufacturing as well. In a Chinese government document on building the bioeconomy, a central theme was biomanufacturing at scale, including plastics, oils and agri-food technology. The ASPI critical technology tracker shows that academics in China publish more of the top 10% of most-cited academic papers on biomanufacturing than academics in any other country. Given China's track record in establishing a lead in several emerging technologies, there's good reason to believe China will build its biomanufacturing base faster than its competitors.
    To capitalise on the economic potential of the biomanufacturing industry and address potential supply chain vulnerabilities, we recommended that Quad countries establish a biomanufacturing hub in India. The proposed Quad-led hub would invest in three main areas: strengthening physical infrastructure, bolstering workforce capabilities, and identifying opportunities for collaboration.
    Researchers at the Takshashila Institution, Saurabh Todi and Shambhavi Naik, along with researchers at the Australian National University, Dirk van der Kley and Daniel Pavlich, have explored this idea in detail in a recently published Discussion Document. The recommendation has also appeared as op-eds in publications like ASPI's The Strategist.
    Matsyanyaaya: Preparing for the quantum leap
    — Rijesh Panicker
    The National Mission for Quantum Technologies and Applications (NM-QTA) seeks to strengthen India’s research and development ecosystem in various quantum technologies like quantum communications, quantum computing, quantum sensing and quantum materials. It will also look to build 50-100 qubit quantum computers within the next 5-8 years.
    With an outlay of ₹6,000 crore over the next eight years, NM-QTA represents a significant step up from the ₹80 crore Quantum-Enabled Science and Technology (QuEST) research programme funded by the Department of Science and Technology (DST).
    India has also sought international collaboration in this area. Among these is a partnership between the National Science Foundati

    • 16 min
    #45 Davids and Goliaths in the world of tech


    Cyberpolitik: AI and Crime Prevention: Is it a force multiplier?
    — Satya Sahu
    Crime prevention is based on the idea that crime can be reduced or eliminated by modifying the factors that influence its occurrence or consequences. We can classify “prevention” into three main types: primary, secondary, and tertiary. Primary prevention addresses the root causes of crime or deters potential offenders before they commit a crime. Secondary prevention aims to intervene with at-risk groups or individuals to prevent them from becoming involved in crime. Finally, tertiary prevention efforts seek to rehabilitate or punish offenders to prevent them from reoffending. (This, however, is beyond the scope of today’s discussion.)
    Flipping the coin, we notice that policing is based on the idea that law enforcement and public order can be maintained by enforcing the law and responding to crimes or incidents. Policing also lends itself to being classified into two main types: reactive and proactive. Reactive policing responds to reported crimes or incidents after they occur. Proactive policing anticipates or prevents crimes or incidents before they occur. On the face of it, AI can help us prevent and fight crime by enhancing both types of crime prevention and policing.
    AI can digest and analyse petabytes of data from disparate sources, such as social media, CCTV footage, sensors used in our Smart Cities™, and boring old digitised government records, to identify patterns, trends, and anomalies that can indicate potential criminal activity. For example, the police in Vancouver use predictive models to identify areas where robberies are expected to occur and then post officers to deter potential thieves or other criminals. Similarly, the police in Los Angeles use a system called PredPol that generates maps of hotspots where crimes are likely to happen based on past data. These systems can help the police allocate their resources more efficiently and effectively and reduce crime rates and response times.
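    The hotspot systems described above share a simple core idea: aggregate past incidents over a spatial grid and assume density predicts future risk. The sketch below is purely illustrative; the function name, grid scheme, and data are hypothetical and not drawn from PredPol or any real deployment.

```python
# Hypothetical sketch of grid-based crime "hotspot" scoring, loosely
# inspired by the predictive-policing idea described above. All names,
# parameters, and coordinates here are illustrative, not a real system.
from collections import Counter

def hotspot_ranking(incidents, cell_size=0.01, top_k=3):
    """Rank grid cells by historical incident count.

    incidents: list of (latitude, longitude) tuples from past reports.
    cell_size: grid resolution in degrees (an assumed value).
    Returns the top_k cells with the most past incidents, under the
    naive assumption that past density predicts future risk.
    """
    # Bucket each report into a grid cell, then count reports per cell.
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return [cell for cell, _ in counts.most_common(top_k)]

# Example: three reports cluster in one cell, one falls elsewhere.
past_reports = [(49.28, -123.12), (49.28, -123.12),
                (49.281, -123.121), (49.30, -123.00)]
print(hotspot_ranking(past_reports, top_k=2))
```

    Real systems layer decay weighting, covariates, and self-exciting point-process models on top of this counting step, but the resource-allocation logic, directing patrols to the highest-scoring cells, is the same, and so are the feedback-loop risks: more patrols in a cell generate more recorded incidents there.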
    When it comes to collecting and processing evidence, such as fingerprints, DNA, facial recognition, voice recognition, and digital forensics, we can look at the UK Home Office's VALCRI, which uses AI to analyse large volumes of data from different sources, such as crime reports, witness statements, CCTV footage, and social media posts, to generate hypotheses and leads for investigators. For example, the police in India used ML-backed facial recognition technology to reunite thousands of missing children with their families. Moreover, AI can help the police present evidence and arguments in court, such as by using natural language processing to generate concise summaries or transcripts of testimonies or documents.
    It could augment efforts to monitor and evaluate police performance and conduct, such as using dashcams, bodycams, or drones to record interactions with the public and/or suspects. For example, the police in New Orleans developed a program called EPIC that uses AI to analyse video footage from bodycams to identify instances of misconduct or excessive force by officers. It can also help the police engage with the public and build trust and confidence, such as using chatbots or social media platforms to communicate with citizens and provide critical information services, hopefully unlike the chatbot from my bank's beleaguered website.
    However, all this has enormous implications for the jurisprudential underpinnings of crime prevention and policing. One such implication arises when AI itself changes the nature and scope of crime and criminality. AI can enable new forms of crime that exploit its capabilities and vulnerabilities, such as cyberattacks, biometric spoofing, deepfakes, autonomous weapons, or social engineering. Leveraging AI allows these future crimes to be more sophisticated, scalable and anonymous than conventional ones. Therefore, the legal and ethi

    • 21 min
    #44: AI Misinform, US-India Cooperate and ISRO Reuse


    Cyberpolitik: The Gell-mann “AI”mnesiac Effect
    — Satya Sahu
    Here are two screenshots of a hastily written prompt to which ChatGPT dutifully responded almost immediately.
    As I read the responses to my prompts, I was painfully aware that the second passage could very plausibly be attached to a doctored image of a scientist holding up a processor die and forwarded countless times on WhatsApp by thousands of my fellow citizens, all overjoyed at the prospect of India finally having become a semiconductor nation. These persuasively written passages contain none of the usual hallmarks of a shoddy copypasta, like questionable grammar and syntactical errors. The issue, evident to anybody familiar with the global semiconductor value chain, is that unless the reader also knows that efforts to produce an indigenous x86 processor are non-existent, they would not be able to discern the falsehood.
    While AI can generate realistic and useful content for entertainment, education, research, and communication, it can also produce and disseminate misinformation, propaganda, and fake news. Misinformation is false or inaccurate information that is deliberately or unintentionally spread to influence people’s beliefs, attitudes, or behaviours. Misinformation can have serious negative impacts on individuals and society, such as eroding trust, polarizing opinions, undermining democracy, and endangering public health and safety.
    One of the challenges of combating misinformation is that people are often vulnerable to cognitive biases that impair their ability to evaluate the credibility and accuracy of information. One such bias is the Gell-Mann Amnesia effect, coined by Michael Crichton and named after the Nobel Prize-winning physicist Murray Gell-Mann. The Gell-Mann Amnesia effect describes the phenomenon of an expert believing news articles on topics outside of their field of expertise even after acknowledging that articles written in the same publication that are within the expert’s field of expertise are error-ridden and full of misunderstanding. For example, a physicist may read an article on physics in a newspaper and find it full of errors and misconceptions but then turn the page and read an article on politics or economics and accept it as factual and reliable.
    The Gell-Mann Amnesia effect illustrates how people tend to forget or ignore their prior knowledge and experience when they encounter new information that is presented by a seemingly authoritative source. This effect can be exploited by AI-generated misinformation, which can mimic the style and tone of reputable media outlets and create convincing content that appeals to people’s emotions, biases, and expectations. AI-generated misinformation can also leverage social media platforms and networks to amplify its reach and influence by exploiting algorithms that favour sensationalism, novelty, and popularity over quality, accuracy, and relevance.
    Another challenge in combating misinformation is that large language models (LLMs), the main technology behind AI-generated content, are biased and incomplete. LLMs are trained on massive amounts of text data collected from the internet, which reflect the biases and gaps present in society and culture. LLMs learn to reproduce and amplify these biases and gaps in their outputs, which can lead to harmful and misleading content. One type of bias that LLMs can perpetuate is second-order bias, which is the bias that arises from the way data is organized, categorized, and represented. Second-order bias can affect how LLMs understand and generate information, such as classifying entities, assigning attributes, inferring relationships, and constructing narratives. These can also affect how LLMs interact with users, such as how they respond to queries, provide feedback, and adapt to preferences.
    Second-order bias can make misinformation more problematic at scale because it can affect not only the content but

    • 14 min
    Technopolitik Special Issue: The untaken road towards AI


    A new discussion document authored by Shailesh Chitnis provides a pragmatic assessment of India's capabilities in Artificial Intelligence (AI) today. It proposes one bold idea which, if properly executed, has the potential to catapult the country into a dominant position in the AI race. But why another document about AI strategy for India? An excerpt from the document is provided below.
    Most reports on AI in India follow a predictable pattern. First, they fuss over the potential of AI to alter every aspect of society and the economy. Next, they present eye-watering numbers on the impact of AI on India’s economy. Finally, there’s a mild caution against missing out on this once-in-a-generation boom.
    Left unsaid are the steps needed to get there. This is not such a report. It assumes that the reader is astute enough to know the transformational nature of AI. The reader also agrees that over time, this general-purpose technology will permeate every aspect of our lives. The extent of change depends on how successful we are in adopting this technology. But no one, this report hopes, needs to be convinced of the potential pay-off with AI. Instead, this short paper is focused on that space between strategy and outcome, namely execution. It deliberately takes a near-term – three to five years – view in its analysis, since the intent is to spur action.
    The problem: Staying behind in the AI race
    India is languishing at the bottom of the artificial intelligence (AI) leaderboard when compared with its G20 peers. Other than exporting our best brains, our contributions have been tiny. Even as the gap between the United States and China on one side and everyone else on the other widens, India's policymakers, researchers, and business leaders have shown little urgency.

    The first AI strategy document by the government was released in 2018, a year or so after China had released its detailed, target-linked AI plan. Five years later, India is still in the strategy and consultation phase, while China has left us behind.
    We need to shift gears. Our research surveyed the state of AI in India and evaluated various policy options. While there are many recommendations that can be made, we prefer those that are immediate and agile.
    Our big idea: BharatAI
    AI is mainstream. And, as the preceding sections have demonstrated, India needs to catch up. Fast. Industry leaders can wait for guidance from the government on a roadmap, with defined milestones, ample funds, and coordinated action among industry, the public sector, and academia. But India is not China. Disruptive change will come from the private sector. One approach is to launch a privately funded research lab that works on foundational models for AI. We call this lab BharatAI.

    This company, BharatAI, has the potential to become the hub of India's AI innovation ecosystem. Our initial estimate calls for an investment of roughly $250 million over five years. But an unproven company that requires over $250 million over five years with no defined product or revenue won't be flush with investor cash. The mismatch between high upfront costs and a long horizon to recoup the investment requires patient capital. Hence we propose a pooled investment approach. Similar to a venture capital (VC) fund, BharatAI's investors will resemble limited partners (LPs) who park their money in the venture for a defined period, say 10 years. In return, they buy equity in the firm but are not involved in the company's management.
    Investors in this company can be of three types:
    a. Strategic investment from India's large technology services companies
    b. Venture capital funds
    c. Private endowments
    The company will also have two other backers who will be critical for its success: a platform partner and the government.
    The company itself would focus on foundational AI problems with broad applicability. BharatAI should not attempt to develop end-to-end applications. It should instead provide tools t

    • 6 min
