Search Results
102. Artificial intelligence: Europe needs to start dreaming again
- Author:
- Niccolò Bianchini and Lorenzo Ancona
- Publication Date:
- 11-2023
- Content Type:
- Working Paper
- Institution:
- Robert Schuman Foundation (RSF)
- Abstract:
- To date, there is no universally accepted definition of artificial intelligence (AI). However, a number of definitions do capture its fundamental aspects, such as the one recently updated by the OECD from its 2019 version, which will probably be incorporated into the European Union's Artificial Intelligence Act, now under discussion. This definition stipulates: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”. This draws a broad perimeter, useful for encompassing the greatest possible number of existing AI technologies, and although it does not consider the human component, it clarifies how AI technologies operate as statistical inference systems which generate different types of output from the processing of inputs. But this definition ignores other essential elements of artificial intelligence. However provocative it may sound, it must be made clear that AI is neither intelligent nor artificial. As Kate Crawford, co-founder of New York University's AI Now Institute, argues, artificial intelligence is not a synthetic product but “comprises natural resources, fuels, human labour, data, infrastructure and classifications”, which illustrates the extent to which economic, political and technical dynamics are central to it. As far as intelligence is concerned, Luciano Floridi, founder of Yale University's Digital Ethics Center, speaks of “artefacts with the ability to act without being intelligent”, i.e. to carry out tasks without any autonomous consciousness, solely thanks to computing power and statistical inference. These details are not mere philosophical flights of fancy, but the premises of an informed reflection on the subject. While the general public discovered AI with ChatGPT a year ago, it is worth pointing out that generative AI, on which ChatGPT is based, is just one of several types of AI, and that various AI applications were already present in our daily lives. These include the recommendation algorithms of social networks, predictive analytics in finance and programmes for diagnosing and personalising therapies in medicine. With these misconceptions dispelled, some of the considerations raised so far can be assessed. The transformative nature of this technology is clear: AI has the potential to revolutionise different areas of human experience and, more profoundly, to change reality and the very role of human beings within it[1]. Among its various fields of application, the world of work is under particular scrutiny, with several analyses forecasting major changes to business and a significant impact on productivity. The challenge of AI has recently been taken up by the international community, notably at the regulatory level. Great effort is being made to ensure future-proof legislative action, given the extremely rapid evolution of AI technologies, in particular the rise of foundation models.
Although a number of recent initiatives, such as the Executive Order issued by US President Biden, the AI Safety Summit at Bletchley Park, the G7's Code of Conduct and the Global AI Governance Initiative launched by China, reflect growing awareness of the issue, the European Union is proving to be a forerunner in regulating this technology with its AI Act. This is why, according to Anu Bradford, the theorist behind the “Brussels Effect”, “it is reasonable to expect that the European regulatory role will continue. What's more, the European Union is influencing other players in this field: the United States is steadily moving towards the European model, abandoning one that is purely libertarian”[2]. Beyond regulation, the industrial development of AI technologies has been underway for some time and is shaping up to be a new arena of fierce confrontation between global players. Indeed, it is likely that the market-led US model, the state-led Chinese model and the rights-led European model will clash not only in a global regulatory challenge, but also in terms of technological development.
- Topic:
- Development, Science and Technology, Regulation, and Artificial Intelligence
- Political Geography:
- Europe
103. Addressing the Liability Gap in AI Accidents
- Author:
- Amrita Vasudevan
- Publication Date:
- 07-2023
- Content Type:
- Policy Brief
- Institution:
- Centre for International Governance Innovation (CIGI)
- Abstract:
- When a device controlled by artificial intelligence (AI) is involved in an accident causing bodily harm, who is at fault? The manufacturer, the software developer or the user? With so many people involved in the design and production of AI-controlled technology (known as the “problem of many hands”), liability can be difficult to attribute, leaving victims and their families without recourse. As AI becomes increasingly entrenched in our daily lives, mishaps will become more common. This policy brief looks at the challenges in determining who is at fault when AI goes awry and at how to address the liability gap.
- Topic:
- Regulation, Artificial Intelligence, Digital Policy, Emerging Technology, and Liability
- Political Geography:
- Global Focus
104. AI-Related Risk: The Merits of an ESG-Based Approach to Oversight
- Author:
- Mardi Witzel and Niraj Bhargava
- Publication Date:
- 08-2023
- Content Type:
- Working Paper
- Institution:
- Centre for International Governance Innovation (CIGI)
- Abstract:
- Artificial intelligence (AI) technology has been found to generate value for many firms; however, it also has unintended and undesirable consequences. The reality of AI-related risk has led to the development of AI governance frameworks and calls for greater oversight of the use of AI. The merits of an ESG (environmental, social, governance)-based approach to oversight of AI-related risk are considered in this paper, with a focus on the current trajectory of international sustainability standards development. Despite their differences, AI governance and ESG reporting both seek to address risk in the broadest sense, with proactive and transparent approaches to its management and mitigation. A preliminary set of recommendations for incorporating material AI-related risk into ESG reporting, covering both general or context-setting disclosures and industry-specific disclosures, is provided.
- Topic:
- Regulation, Artificial Intelligence, Industry, Oversight, and Emerging Technology
- Political Geography:
- Global Focus
105. How to Curb Investments in Chinese Technology: Initiatives and Debates in the United States
- Author:
- Mathilde Velliet
- Publication Date:
- 08-2023
- Content Type:
- Policy Brief
- Institution:
- Institut français des relations internationales (IFRI)
- Abstract:
- In a continuation of U.S. efforts to slow China's development and acquisition of strategic technologies, Washington has imposed new restrictions on American investment in Chinese technology sectors such as artificial intelligence (AI), quantum, and semiconductors. On August 9, 2023, President Biden issued an unprecedented executive order announcing notification requirements and prohibitions for U.S. tech investments in China. Three sectors are targeted: semiconductors, quantum information technology, and artificial intelligence. Washington's main concern is that these investments are accompanied by technology transfers and “intangible benefits” that contribute to the development of Chinese military capabilities. This concern, first voiced in the 1980s, has received a heightened political response since 2018. Given the emerging consensus in Congress and the proliferation of bills, further measures are likely to be adopted in the coming months. The European Union and certain Member States are beginning to reflect on the consequences of the U.S. measures, and on whether to adopt their own tools for controlling investment in China.
- Topic:
- Science and Technology, European Union, Investment, Artificial Intelligence, Quantum Computers, and Semiconductors
- Political Geography:
- China, Europe, Asia, and United States of America
106. China’s AI Regulations and How They Get Made
- Author:
- Matt Sheehan
- Publication Date:
- 07-2023
- Content Type:
- Working Paper
- Institution:
- Carnegie Endowment for International Peace
- Abstract:
- China is in the midst of rolling out some of the world’s earliest and most detailed regulations governing artificial intelligence (AI). These include measures governing recommendation algorithms—the most omnipresent form of AI deployed on the internet—as well as new rules for synthetically generated images and chatbots in the mold of ChatGPT. China’s emerging AI governance framework will reshape how the technology is built and deployed within China and internationally, impacting both Chinese technology exports and global AI research networks. But in the West, China’s regulations are often dismissed as irrelevant or seen purely through the lens of a geopolitical competition to write the rules for AI. Instead, these regulations deserve careful study of how they will affect China’s AI trajectory and what they can teach policymakers around the world about regulating the technology. Even if countries fundamentally disagree on the specific content of a regulation, they can still learn from each other when it comes to the underlying structures and technical feasibility of different regulatory approaches. In this series of three papers, I attempt to reverse engineer Chinese AI governance. I break down the regulations into their component parts—the terminology, key concepts, and specific requirements—and then trace those components to their roots, revealing how Chinese academics, bureaucrats, and journalists shaped the regulations. In doing so, I build a conceptual model of how China makes AI governance policy, one that can be used to project the future trajectory of Chinese AI governance (see figure 1). China’s three most concrete and impactful regulations on algorithms and AI are its 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis (synthetically generated content), and the 2023 draft rules on generative AI. Information control is a central goal of all three measures, but they also contain many other notable provisions. The rules for recommendation algorithms bar excessive price discrimination and protect the rights of workers subject to algorithmic scheduling. The deep synthesis regulation requires that conspicuous labels be placed on synthetically generated content. And the draft generative AI regulation requires both the training data and model outputs to be “true and accurate,” a potentially insurmountable hurdle for AI chatbots to clear. All three regulations require developers to make a filing to China’s algorithm registry, a newly built government repository that gathers information on how algorithms are trained, and to pass a security self-assessment. Structurally, the regulations hold lessons for policymakers abroad. By rolling out a series of more targeted AI regulations, Chinese regulators are steadily building up their bureaucratic know-how and regulatory capacity. Reusable regulatory tools like the algorithm registry can act as scaffolding that eases the construction of each successive regulation, a particularly useful step as China prepares to draft a national AI law in the years ahead. Examining the roots of these regulations also grants insight into the key intellectual and bureaucratic players shaping Chinese AI governance. The Cyberspace Administration of China (CAC) is the clear bureaucratic leader in AI governance to date, but that position may grow more tenuous as the focus of regulation moves beyond the CAC’s core competency of online content controls.
The Ministry of Science and Technology is another key player, one that may see its profile rise due to recent government restructuring and an increased focus on regulating underlying AI research. Feeding into this bureaucratic rulemaking are several think tanks and scholars, notably the China Academy of Information and Communications Technology and Tsinghua University’s Institute for AI International Governance. In the years ahead, China will continue rolling out targeted AI regulations and laying the groundwork for a capstone national AI law. Any country, company, or institution that hopes to compete against, cooperate with, or simply understand China’s AI ecosystem must examine these moves closely. The subsequent papers in this series will dig into the details of these regulations and how they came about, deepening understanding of Chinese AI governance to date and previewing what is likely coming around the bend.
- Topic:
- Foreign Policy, Science and Technology, Cybersecurity, Regulation, and Artificial Intelligence
- Political Geography:
- China and Asia
107. The sixth domain: The role of the private sector in warfare
- Author:
- Franklin D. Kramer
- Publication Date:
- 10-2023
- Content Type:
- Special Report
- Institution:
- Atlantic Council
- Abstract:
- The United States and its allies have for some time recognized, as NATO doctrine provides, five operational domains—air, land, maritime, cyberspace, and space[1]. Each of those arenas fully fits the understanding of a domain as a “specified sphere of activity” and, in each, militaries undertake critical wartime actions. But in the ongoing Ukraine-Russia war, certain key operational activities have been undertaken by the private sector as part of the conduct of warfare[2]. By way of example, private-sector companies have been instrumental both in providing effective cybersecurity and in maintaining working information technology networks. As part of such efforts, these firms have established coordinated mechanisms to work with relevant government actors. These operational and coordinated activities by the private sector demonstrate that there is a “sixth domain”—specifically, the “sphere of activities” of the private sector in warfare—that needs to be included in warfighting constructs, plans, preparations, and actions if the United States and its allies are to prevail in future conflicts. As will be elaborated below, that sphere of activities focuses mainly on the roles of information and critical infrastructures, including their intersections—ranging from the transmission and protection of information to the assurance of critical infrastructure operations. Many of the United States’ activities in the sixth domain will take place in the US homeland. However, while “defending the homeland” is listed as the first priority in the 2022 National Defense Strategy, insufficient attention has been paid to the actions that will be required of the private sector, beyond just the defense industrial base, as part of accomplishing an effective defense[3]. Likewise, when US military forces are engaged in overseas combat, private-sector companies in allied countries (as well as US companies operating overseas) will be critical to the effectiveness of US forces, as well as to the allies’ own militaries. In short, establishing an effective strategy for the private sector in warfare is a key requirement for the United States and its allies.
- Topic:
- Cybersecurity, Drones, Artificial Intelligence, Private Sector, Defense Industry, and Russia-Ukraine War
- Political Geography:
- Russia, Europe, Ukraine, and United States of America
108. Artificial Intelligence and the Clustering of Human Capital: The Risks for Europe
- Author:
- Erik van der Marel
- Publication Date:
- 12-2023
- Content Type:
- Working Paper
- Institution:
- European Centre for International Political Economy (ECIPE)
- Abstract:
- Europe trails the global frontier of productivity growth, and the region’s trend is sluggish. Much of Europe’s prospective economic growth is likely to come from AI and its adoption by European firms, which is projected to lift the productivity trend. For such AI-generated growth to materialise, high levels of human capital need to be available to firms, in particular Science, Technology, Engineering, and Mathematics (STEM) graduates. However, much of this human capital for AI is leaving Europe, and the region experiences a net outflow of the skills required to make AI work. Moreover, the spread of both human capital and AI is very uneven within Europe, with some regions in the West spearheading adoption while others in the East trail behind. This paper shows that European regions’ past investment in human capital explains more than one-third of AI adoption by their firms a decade later. Furthermore, this persistent trend is driven by the most human-capital-intensive firms, which explain more than 50 percent of the observed adoption of AI across Europe. The clustering of human capital is very persistent over time, and regions that lag in human capital now will weigh down Europe’s ability to generate AI-related growth in the future. This will likely have long-lasting effects for Europe as a whole, defining its capacity to catch up to the global frontier or entrenching the region’s slow-moving growth and productivity trend. Policymakers who are serious about creating future growth on the back of AI should therefore invest in human capital now, or Europe will fall further behind.
- Topic:
- European Union, Economic Growth, Human Capital, and Artificial Intelligence
- Political Geography:
- Europe
109. Skills or a degree? The rise of skills-based hiring for AI and green jobs
- Author:
- Fabian Stephany and Eugenia Gonzalez Ehlinger
- Publication Date:
- 12-2023
- Content Type:
- Working Paper
- Institution:
- Bruegel
- Abstract:
- For emerging professions, such as jobs in the field of artificial intelligence (AI) or sustainability (green jobs), labour supply does not meet industry demand. In this scenario of labour shortages, our work aims to understand whether employers have started focusing on individual skills rather than on formal qualifications in their recruiting. By analysing a large time-series dataset of around one million online job vacancies posted in the United Kingdom between 2019 and 2022, and drawing on diverse literature on technological change and labour market signalling, we provide evidence that employers have adopted so-called ‘skills-based hiring’ for AI and green roles, as more flexible hiring practices allow them to increase the available talent pool. In our observation period, demand for AI roles grew twice as fast as average labour demand. At the same time, the mention of university education in AI job postings declined by 23 percent, while AI roles advertise five times as many skills as the average job posting. Our regression analysis also shows that university degrees no longer carry an educational premium for AI roles, while for green positions the premium persists. In contrast, AI skills carry a wage premium of 16 percent, similar to that of a PhD (17 percent). Our work recommends making use of alternative skill-building formats such as apprenticeships, on-the-job training, MOOCs (massive open online courses), vocational education and training, micro-certificates and online bootcamps to use human capital to its full potential and to tackle talent shortages.
- Topic:
- Education, Employment, Artificial Intelligence, Skilled Labor, and Green Jobs
- Political Geography:
- Europe
110. The competitive relationship between cloud computing and generative AI
- Author:
- Christophe Carugati
- Publication Date:
- 12-2023
- Content Type:
- Working Paper
- Institution:
- Bruegel
- Abstract:
- Cloud computing providers and generative artificial intelligence (GenAI) providers nurture a close, interdependent relationship: GenAI providers need cloud providers to train, run and deploy their GenAI solutions, while cloud providers see GenAI providers as a business driver to grow their market shares in cloud and related markets, such as productivity software or search engines. The cloud/GenAI relationship takes various forms, including exclusive and strategic partnerships, especially between large cloud providers and GenAI providers, across all parts of the cloud market: infrastructure, platforms and software. Both competition benefits and risks are likely to result from these relationships. Benefits arise from increased competition and innovation in the cloud and GenAI sectors. Risks relate to potential concentrations arising from the partnerships between cloud and GenAI providers, and to anticompetitive practices, including discrimination in the supply of IT equipment by dominant IT providers, interoperability obstacles to switching, use of business-user data, self-preferencing of cloud services over third parties, and tying and pure bundling. Merger control and antitrust laws can address some of the competition risks, while other laws, including the European Union’s Digital Markets Act and Data Act, can deal with competition issues in digital markets and the cloud sector. Nevertheless, there are gaps. The European Commission should amend existing EU instruments, including by changing the definition of a concentration under merger control, and should specify interoperability requirements for cloud providers under the Data Act. The Commission should also closely monitor developments in and outside Europe through market investigations, including with international counterparts, and should intervene to tackle imminent competition risks using fast procedural tools, such as interim measures.
- Topic:
- European Union, Antitrust Law, Artificial Intelligence, Cloud Computing, and Competition
- Political Geography:
- Europe and Global Focus