
Exploring the Ethical Implications of AI Accountability


GGI
AI and an Accountability Conundrum

If you are interested in applying to GGI's Impact Fellowship program, you can access our application link here.


 

1. Abstract


“A computer would deserve to be called intelligent if it can deceive a human into believing it was human”, said Alan Turing on Artificial Intelligence (AI hereafter). From his time to the present, AI has rapidly transitioned from a futuristic concept to a pervasive reality. With cross-cutting industrial applications in health, defense, retail and governance, AI has left no corner untouched. The year 2022 specifically marks a tipping point in the history of Artificial Intelligence. The impact of this technological revolution is so massive that scholars see the world order moving from a geo-political to a 'geo-digital' one. Why have all major poles in the present post-Westphalian world order—the USA, the UK, China, the EU, India—started conversations about the regulation of AI from 2022 onwards? Let us consider.


From recommender systems shaping online experiences to autonomous weapons deployed by governments and especially with the introduction of Generative AI and Large Language Models (LLMs) in 2022 to the larger public, the technology has reached an unprecedented scale. Things have begun to develop at a rapid pace and there are several reasons for this:


1. Advent of Big Data: The internet and the large-scale use of sensors generated unprecedented amounts of data – a significant development for AI technologies, which are based on the analysis of large numbers of examples.


2. Exponential increase in semiconductor chip and data processing technology: The emergence of cloud-based services massively simplified and increased access to storage and computing power for businesses. This not only enabled complex calculations to be carried out on these large quantities of data but also made it possible for applications to be scaled up without restriction.


3. Ease of development of intelligent models: Major technology companies now offer application programming interfaces (APIs) that connect to standardized AI services and make developing applications utilizing artificial intelligence much easier. For example, if facial recognition is needed for an app, an API can be used instead of developing a facial recognition algorithm for the individual app concerned (a minimal sketch of this pattern follows this list).


4. Rise of Generative AI and Large Language Models: With the rise of easy-to-use and convenient Generative AI chatbots like OpenAI’s ChatGPT and Google’s Bard, AI technology has reached the masses. It took just 2 months for ChatGPT to reach 100 million users. Such an exponential rise in the user base heightens concerns about accountability for its ethical usage and the need for a regulatory framework.
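Returning to the point about APIs (item 3 above), the sketch below shows how an application might call a facial-recognition service instead of building its own model. The endpoint, API key and response fields are hypothetical placeholders rather than any real provider's interface; the point is simply that a few lines of glue code stand in for an entire in-house computer-vision effort.

```python
import requests

# Hypothetical facial-recognition endpoint and API key, purely for illustration.
API_URL = "https://api.example-vision.com/v1/detect-faces"
API_KEY = "YOUR_API_KEY"

with open("photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},  # send the raw image to the hosted service
    )

# The (assumed) service returns detected faces with bounding boxes and
# confidence scores, so the app never needs its own recognition model.
for face in response.json().get("faces", []):
    print(face["bounding_box"], face["confidence"])
```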


With an unprecedented speed of AI market adoption, the possibility of an AI prediction going wrong, and the scale of the impact of AI's decisions, have also assumed unprecedented proportions. In fact, the negative use cases of AI have caused enough concern that world leaders came together for the first-ever AI Safety Summit at Bletchley Park, London in November 2023. The major risks to public safety presented by ‘frontier AI’, or highly capable foundation models, sit at three levels: first, ‘societal risks’, which include degradation of the information environment and labour market disruption; second, ‘bias, fairness and representational risks’; and most importantly, ‘misuse risks’, including dual-use science risks, cyber risks, and disinformation and influence operations, as well as ‘loss of control’ scenarios where AI escapes human control and starts acting independently.


Since the technology is impacting society in both positive and potentially harmful ways, it is imperative to build a collective understanding of its use and implications and to establish accountability for its ethical development and safe use for the betterment of society. Establishing accountability is itself a challenge, given the number of stakeholders present and the complexity of assigning responsibility - corporates, national governments, international organizations and civil society have to work towards building consensus on regulating and holding each other accountable through a checks-and-balances approach.


It is in this context that our White Paper explores the very pertinent question of “Artificial Intelligence and the Accountability Conundrum” and makes recommendations for the future course of action on accountability. For this, we not only look at the multi-sectoral and multi-stakeholder negative use cases of AI but also deep dive into the present state of global regulations around Artificial Intelligence. The hope is that our study will lead to a better understanding of the subject, benefiting future regulation as well as the ethical development of AI.




2. Introduction to Artificial Intelligence


The term Artificial Intelligence was coined in the 1950s by John McCarthy of the Massachusetts Institute of Technology, and Marvin Minsky defined it as "the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning". The rapid advancement of AI technology since the 2010s has ushered in a new era where machines are integral to various aspects of our lives.


From AI-driven flight booking and airport monitoring to assisting pilots during flights, AI systems are increasingly involved in everyday activities. They also play a role in crucial decisions such as loan approvals, welfare eligibility, and job hiring, and even influence criminal justice by determining who gets released from jail. Governments are deploying AI in autonomous weapons and surveillance, while virtual assistants, self-driving cars, and scientific breakthroughs owe their progress to AI. Recommender systems shape our online experiences and can create media content; Generative AI can generate poems and essays and make logical suggestions based on text or image inputs. While these applications can generate positive outcomes, AI certainly has its pitfalls. To minimize the negative use cases of AI, it is imperative to hold someone accountable for the complex ethical decisions that an AI model takes.


To delve deeper into attaching accountability to stakeholders in the developmental journey of AI, we first ought to consider the basics of the technology to get a better grasp of it.



2.1 Components of Artificial Intelligence





3. Increasing Market Share and Adoption of Artificial Intelligence


The global artificial intelligence market size is projected to expand at a compound annual growth rate (CAGR) of 37.3% from 2023 to 2030, reaching $1,811.8 billion by 2030. AI is expected to contribute $15.7 trillion to the global economy by 2030, more than the current output of China and India combined. Let us look at the market adoption of AI.
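Before turning to adoption figures, note that the projection above is plain compound-growth arithmetic. The sketch below, assuming only the cited 37.3% CAGR and the $1,811.8 billion 2030 figure, backs out the implied 2023 base and traces the year-by-year trajectory; it illustrates the calculation rather than offering an independent forecast.

```python
# Compound-annual-growth-rate arithmetic behind the cited market projection.
cagr = 0.373          # 37.3% CAGR projected for 2023-2030
target_2030 = 1811.8  # projected market size in $ billions
years = 2030 - 2023

# Implied 2023 base, since value_2030 = base * (1 + cagr) ** years
base_2023 = target_2030 / (1 + cagr) ** years
print(f"Implied 2023 market size: ${base_2023:.1f}B")  # roughly $195-200B

# Year-by-year trajectory under constant compound growth
for i in range(years + 1):
    print(2023 + i, round(base_2023 * (1 + cagr) ** i, 1))
```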


● State of Global AI Adoption


The global enterprise adoption of AI has more than doubled since 2017 and is growing at an impressive rate, with signs of further acceleration in the coming years. According to an IBM survey report, Indian and Chinese companies are taking the lead in the use of AI compared to other technologically advanced countries.


● Chinese and Indian Companies Lead in AI Adoption


According to an IBM report, Chinese and Indian companies lead in AI adoption, with almost 60% of IT professionals saying their organization already uses AI applications. IT spending on AI is projected to see an annual growth rate of around 27% from 2021 to 2026.


● Generative AI Adoption and Growth Trend


Generative AI has become the preferred tool for content production, particularly in the realms of natural language processing and image generation. This technology is transforming various industries, including marketing, content creation, and healthcare. Marketers and content creators leverage AI to write articles and generate images, enhancing both efficiency and creativity. Generative AI also powers personalized user experiences across different platforms, driven by the analysis of user behavior and preferences, leading to increased engagement and satisfaction in sectors such as e-commerce, entertainment, and social media. Additionally, generative AI is reshaping customer service with chatbots, improving game production in the gaming industry, and enhancing content recommendation systems, while also playing a vital role in data analysis and insights. Consequently, this trend is expected to drive the growth of the Generative AI Market.




4. Artificial Intelligence - A Double-Edged Sword


Though the development of AI could contribute up to $15.7 trillion to the global economy by 2030, it is a double-edged sword with its own set of negatives. Instances such as Microsoft's AI chatbot Tay posting offensive and inappropriate tweets in 2016 and Amazon’s recruitment AI programme being biased against women applicants in 2018 are testimony to that fact. There are also numerous examples of governments using AI algorithms leading to negative outcomes; for example, it was reported in 2019 that the London Metropolitan Police’s facial recognition technology had a false positive rate of 81%. A National Institute of Standards and Technology (NIST) study in the USA revealed that facial recognition software has higher rates of false positive identifications for people of color. This algorithmic bias can lead to wrongful arrests and unjust treatment by law enforcement agencies.


In such cases, when the use of an AI algorithm leads to negative outcomes, a pertinent question that arises is how to attach accountability for its failure among the various stakeholders associated with the development and use of the said algorithm. Issues of responsibility, accountability, liability and acting with integrity seem to be of paramount importance when analyzing the ethical impact of non-human entities such as an AI algorithm taking decisions with the ability to impact human lives.


Attaching accountability is a very important issue within AI ethics, with the fear that companies or governments will try to shift blame and responsibility onto autonomous or semi-autonomous systems. Use of AI algorithms also creates a “responsibility gap”, whereby it is unclear who is responsible when something goes wrong.


This leads to what we call the accountability conundrum in this paper. Users and consumers of AI models face challenges when models perform poorly, lacking insight into reasons such as biased training data or suboptimal choices of model parameters. The lack of transparency leaves consumers with limited control or understanding, prompting a need for more visibility and trust in models.


To understand and find solutions to the problem of attaching accountability, one needs to look at the various stakeholders in AI’s development and usage journey. We define these stakeholders in the subsequent sections and analyze their individual roles, responsibilities and the challenges they face in using, developing and regulating AI applications safely.




5. Stakeholder Analysis: Attaching Accountability


A stakeholder is an individual who has some investment in AI, either in the form of direct support for research and development or a vested interest in the success of the AI. For the purposes of simplification, this paper identifies three levels of potentially relevant stakeholders when responsible AI systems are considered: model developers, model users and regulators (national / international stakeholders) engaged in making laws, rules, and regulations. We look at the negative use cases of AI from the perspective of these three major categories of stakeholders in AI’s journey.



5.1 Model Developers


AI developers are the people responsible for building intelligent systems capable of performing tasks that traditionally require human intelligence. They design, develop and deploy AI-powered solutions. They are at the forefront of the AI revolution and are key stakeholders in developing safe AI solutions by maintaining high standards of transparency and incorporating the virtues of fairness, explainability, non-discrimination, privacy and data protection. Model developers include - software engineers, data scientists, AI researchers, machine learning engineers etc.


From a developer's perspective, creating AI-powered software involves several stages. It begins with data collection and preprocessing, a stage where relevant datasets are gathered and cleaned to ensure accuracy and consistency. This is followed by model selection and training, among other steps. Lastly, performance evaluation assesses accuracy and effectiveness. It is essential to address prejudice and ethics across the whole AI development process — from data collection to deployment — to prevent biased or unethical AI. Developers should keep the following things in mind while designing an AI tool:


● Eliminating biases by involving truly diverse teams across ethnicity, gender, socioeconomic status and educational background, ensuring that multiple perspectives are incorporated in developing and testing AI algorithms.


● Choosing data sources carefully and performing data augmentation to get varied data sets, analyzing and reviewing algorithms to correct biases over time.


● Analyzing disparate impact, which assesses the differential impact of AI systems on different groups. Conducting fairness audits and using interpretability tools can provide insights into decision-making processes and help uncover hidden biases.


● Fairness measures can minimize AI bias by assessing and evaluating an algorithm’s fairness and spotting potential biases. A fairness score determines how the algorithm performs for various ethnic or gender groups and highlights any discrepancies in results (a minimal illustrative sketch of such group-level checks follows this list).


● Transparency in AI can be achieved by developers using model interpretation, which involves visualizing the internal workings of an AI system to comprehend how it arrived at a specific decision. Another technique is counterfactual analysis, which involves testing hypothetical scenarios to grasp how an AI system would respond. These techniques enable humans to comprehend how an AI system arrived at a specific decision, and detect and rectify biases or errors.
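As flagged in the fairness-score bullet above, the following is a minimal sketch of the kind of group-level check developers can run: selection rates per group, the disparate impact ratio between them, and per-group accuracy. The column names, the toy data and the 0.8 threshold (the common 'four-fifths' rule of thumb) are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

# Illustrative audit data: one row per applicant, with the model's decision,
# the true outcome, and a protected attribute. Column names are assumptions.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,   0,   1,   0,   0,   1,   0,   1],
    "actual":   [1,   0,   1,   0,   1,   1,   0,   1],
})

# Selection rate per group: how often the model says "yes" for each group.
selection_rates = df.groupby("group")["selected"].mean()

# Disparate impact ratio: lowest selection rate divided by highest.
di_ratio = selection_rates.min() / selection_rates.max()

# Per-group accuracy as a simple "fairness score" style comparison.
accuracy = (
    df.assign(correct=(df["selected"] == df["actual"]).astype(int))
      .groupby("group")["correct"].mean()
)

print(selection_rates, di_ratio, accuracy, sep="\n")
if di_ratio < 0.8:  # illustrative "four-fifths" threshold
    print("Potential disparate impact - investigate data and model choices.")
```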


Finally, developers need to ensure the security and fairness of AI systems through systems of accountability. This involves establishing distinct lines of accountability for AI decision-making and holding developers and users liable for any adverse effects. For instance, the European Union’s General Data Protection Regulation (GDPR) — which provides for legal repercussions for non-compliance — requires that businesses put safeguards in place to ensure the transparency and equality of AI algorithms.


Let us look at a case study relating to developers:


AI Development gone wrong - An AI model thinks a turtle is a gun. Google developed a neural network to identify everyday objects, and it thought a turtle looked like a rifle. This was an example of an ‘adversarial image’ - a photo engineered to trick machine vision software. Adversarial images exploit blind spots left in an AI algorithm or neural network during its development. One can theoretically make adversarial glasses that trick facial recognition systems, or even add adversarial layers on top of images to fool AI systems. Issues like these reveal how fragile AI systems can be and, unless rectified by developers, can lead to havoc once facial recognition and machine vision are more commonly deployed in our society.
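For illustration, adversarial images of this kind are typically crafted with gradient-based methods such as the fast gradient sign method (FGSM). The sketch below applies FGSM to a toy, untrained classifier; the incident described above involved a far more capable production model and a 3D-printed turtle, so treat this purely as a demonstration of the mechanism: perturb the pixels slightly in the direction that most increases the model's loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for an image classifier (untrained; real attacks target trained models).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "turtle" photo
true_label = torch.tensor([3])                    # its correct class index

# Loss of the correct prediction, backpropagated to the input pixels.
loss = F.cross_entropy(model(x), true_label)
loss.backward()

# FGSM step: nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

# The perturbation is nearly invisible, yet the predicted class can change.
print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```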



5.2 Model Users


Users of artificial intelligence algorithms span diverse sectors, including corporations utilizing AI for automation and analytics, governments employing AI in public services, and individuals leveraging AI-driven applications for personal tasks and entertainment.


Users of these AI systems include a wide variety of people and organizations. They may or may not be technically equipped to understand the working of the AI system that they are using. A user should be aware of, and keep in mind, the following considerations while using AI systems for personal or organizational productivity:


● Users have to be aware of the intellectual property rights attached to the generative content they create or utilise, and give proper credit to the rights owner when using such information.


● Individual users of AI systems are susceptible to both positive and negative emotional influence. An effective and influential AI system has the potential to influence how people view society itself. Users should be adept at recognising when they are being influenced.


● Affective AI is also open to the possibility of deceiving and coercing its users – researchers have defined the act of an AI subtly modifying behavior as 'nudging', when an AI emotionally manipulates and influences its user through the affective system. Users should be educated to recognise such nudges.


● Users should be aware of the algorithmic updates incorporated in the system and should keep updating themselves with the advances in the technology as a whole.


● There are various ways in which AI could inflict emotional harm, including false intimacy, overattachment, objectification and commodification of the body, and social or sexual isolation. These implications have to be understood by the users, especially the vulnerable population.


● Private and public organizations utilizing AI systems for enhancing the productivity of their employees or the larger public have to incorporate AI systems in line with the norms and values specific to the region or people they are engaging with, and the sensitivities of the culture in which they are operating.


● AI systems can sometimes hallucinate and provide false information in response to certain queries. Users have to be aware of this phenomenon and should proactively verify the information they obtain from the system before using or publishing it in the public space.



Model User 1: AI and the Government - A case of bias in law enforcement

Artificial intelligence has become a pivotal tool in law enforcement globally, aiding in crime prediction and prevention. Advanced algorithms analyze vast datasets to identify patterns, helping law enforcement agencies anticipate criminal activities. One prominent example is Compas, a risk-assessment tool made by a privately held company and used in the US criminal justice system; the score it generates is used by judges in sentencing decisions. The integration of such AI algorithms, specifically in determining a defendant's risk of committing another crime, has raised significant concerns about oversight and transparency. The lack of proper safeguards and the absence of federal laws establishing standards or requiring inspection contribute to the potential erosion of the rule of law and individual rights.

One notable case illustrating these concerns is that of defendant Eric Loomis, who received a lengthy sentence based on a "high risk" score generated by the Compas risk-assessment tool. The troubling aspect is that Compas operates as a black-box system, meaning neither the judge nor any other party involved is privy to how the tool arrives at its decisions. This lack of transparency poses a fundamental challenge to the fairness and accountability of the criminal justice process. ProPublica's investigation into the use of AI in criminal screening further highlights alarming disparities in the accuracy of risk assessments provided by Compas, with the tool being twice as likely to make errors with Black individuals compared to their white counterparts. This revelation underscores a pervasive issue of bias in the software, suggesting that these AI systems are not only opaque but also potentially discriminatory in their outcomes.

In the absence of clear standards and oversight, the use of such biased AI tools in the criminal justice system has far-reaching implications. The risk of reinforcing and perpetuating systemic inequalities becomes evident, as these technologies wield significant influence over crucial decisions related to bail, sentencing, and parole. Urgent attention is required to establish comprehensive regulations and transparency measures to ensure the responsible and fair deployment of AI in the legal realm, safeguarding the principles of justice and protecting individual rights.
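The ProPublica finding cited above comes down to comparing error rates across groups. Below is a minimal sketch of that style of audit on a small, hypothetical table with a high-risk flag, an actual reoffence outcome and a race column; the data and column names are illustrative stand-ins, not the Compas dataset.

```python
import pandas as pd

# Hypothetical audit table: the tool's "high risk" flag, whether the person
# actually reoffended, and a protected attribute. Purely illustrative data.
df = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "high_risk":  [1,       1,       0,       0,       1,       0],
    "reoffended": [0,       1,       0,       0,       1,       0],
})

# False positive rate per group: flagged high risk despite not reoffending.
fpr_by_group = (
    df[df["reoffended"] == 0]        # keep only people who did not reoffend
    .groupby("race")["high_risk"]
    .mean()                          # share of them labelled high risk
)
print(fpr_by_group)

# A large gap between groups (ProPublica reported roughly a two-fold difference
# in error rates) is exactly the disparity an external audit can surface.
```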

Model User 2: AI and corporates - An imperfect hiring technology

Artificial intelligence is revolutionizing Human Resource management by streamlining recruitment processes through automated resume screening and candidate matching. HR analytics powered by AI enables data-driven decision-making, predicting employee performance, and identifying areas for improvement.

Amazon's foray into using artificial intelligence (AI) for recruitment took an unexpected turn when the company discovered biases in its system. The tech giant had been developing computer programs since 2014 to automate the screening of job applicants' resumes, aiming to streamline the search for top talent. However, by 2015, it became apparent that the new system was not evaluating candidates for roles like software developers and other technical positions in a gender-neutral manner. The root of the issue lay in the fact that Amazon's computer models had been trained on resumes submitted over a 10-year period, predominantly from male applicants. This training data reflected the male dominance prevalent in the tech industry. As a result, the AI recruitment tool exhibited biases against women, raising concerns about fairness and gender equality in the hiring process.

The American Civil Liberties Union (ACLU) has taken notice of such instances and is actively challenging laws that permit the criminal prosecution of researchers and journalists who test hiring websites' algorithms for discrimination. Algorithmic fairness has become a growing focus, with organizations like the ACLU advocating for transparency and accountability in the use of AI in hiring practices. Despite these efforts, challenges persist in addressing bias in automated hiring. Critics, including Rachel Goodman, a staff attorney with the Racial Justice Program at the ACLU, acknowledge the difficulty in pursuing legal action against employers for discriminatory AI practices. A notable obstacle is the lack of transparency, where job candidates may remain unaware that automated systems are being used in the hiring process. This opacity complicates efforts to hold organizations accountable for biases in AI-driven recruitment tools.



5.3 Model Regulators


The concept of regulation is intrinsic to the success of any new invention and its application for the larger good of society. Unrestricted development, deployment and application of Artificial Intelligence will favour the big fishes, putting unaware and unskilled users at high risk. It shall hurt their liberty and exemplify inequality, thereby undermining the cause of justice for all. For instance, when an AI algorithm filters out potential interviewees based on their racial background, a particular section of society loses its fair chance at employment due to AI bias. If such effects compound in the absence of accountability, the society we see in the future will certainly be an unequal, divided and polarized one. Regulation therefore intends to prevent the misuse or exploitation of AI, intentional or unintentional, which can have multiple serious ramifications given the scale of AI applications.


The regulation of Artificial Intelligence (AI) involves a combination of governmental bodies, international organizations, and industry-specific entities. At the national level, countries often have regulatory agencies or departments responsible for overseeing aspects of AI development and deployment. For example, in the United States, the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST) play roles in regulating AI applications. Internationally, organizations like the European Union (EU) and the Organisation for Economic Co-operation and Development (OECD) contribute to shaping AI governance frameworks. Additionally, industry-specific regulations may exist, such as those in the healthcare or financial sectors, which are tailored to address the unique challenges and ethical considerations associated with AI in those domains. The regulatory landscape for AI is evolving, with ongoing efforts to establish standardized principles, ethical guidelines, and legal frameworks to ensure responsible and accountable AI development and use on a global scale.


Let us consider a case study:



Model Regulators: AI and Disinformation - Regulating social media now empowered with AI-generated content

The emergence of deep fakes, exemplified by the fictional speech of Belgian Prime Minister Sophie Wilmès created in 2020 by the activist group Extinction Rebellion, poses a growing threat to democratic processes and societal cohesion. Deepfakes, which use advanced techniques such as deep learning or machine learning to generate realistic human faces and bodies in videos, are increasingly becoming tools for targeted disinformation campaigns. This trend, outlined in a Wired article, suggests that the manipulation of audiovisual content could contribute to misinformation, amplifying the challenges faced by democratic societies.

Adding to the concerns surrounding disinformation are online bots capable of generating fake texts, including altered news articles and tweets, further complicating efforts to discern genuine information from manipulated content. Equally significant are "cheap fakes", which employ conventional techniques like speeding, slowing, cutting, and re-contextualizing footage. The broader context of audiovisual (AV) manipulation, encompassing both advanced deepfake technologies and simpler manipulation techniques, is a regulatory challenge. Regulators face the challenge of implementing measures which retain the beneficial uses of this technology while preventing its malicious exploitation.

All major negative use cases of Artificial Intelligence can be attributed to the lack of adequate regulations and laws around the development, deployment and application of AI. This necessitates that AI as a technology becomes regulated. Akin to the regulation of other global threats of the present times - the environment, terrorism, cybercrime and so on - regulation of AI can be done at two levels: national and global. At the national level, 31 countries have passed AI legislation and 13 more are debating AI laws (CSIS data). Law-making as a way of regulating this sector is, however, yet to pick up pace, given the rapidly evolving nature of AI and its applications. Thus, domain regulators have a larger responsibility to ensure fair application of AI and to set industry standards for AI in their respective domains, till laws for holistic regulation of AI are built.


At the Global Level, striking a consensus is an even more challenging task. However, akin to global standards on the Minimum Corporate Tax under the OECD, or the Paris Agreement under UNFCCC, steps are being taken to set uniform standards to regulate AI. This is because AI is global in its impact and scale. The recent Bletchley Park Declaration (London Summit) as well as Global Partnership for AI (GPAI) forums are examples of steps in that direction. For instance, the London Summit has proposed an International register for frontier AI models that will enable the governments to assess the risks involved. This is because while in the past, frontier technology development, in nuclear and space, was led by governments, today AI development outside China is in the hands of digital corporations. Furthermore, since these digital corporations are multinational in a post-Westphalian Geo-Digital Age, using datasets from cross-cutting national jurisdictions and being global in their impact, regulations by nation-states in silos may not effectively solve for accountable AI.


Given the above considerations, regulators are the most important stakeholders in the AI development and deployment journey. To put forward policy recommendations that better address the AI accountability conundrum, we now delve into the state of global AI regulation - a literature review of how major technology-superpower jurisdictions are planning to regulate AI.




6. State of Global AI Regulation - Who is to be held accountable?


The landscape of emerging regulation is varied and dynamic. In several countries, including the US, Europe, and the UK, regulatory laws are in developmental stages. The US follows a voluntary approach with an AI risk management framework, emphasizing safety, security, privacy, equity, and civil rights, complemented by a White House-released AI Bill of Rights. The EU’s approach categorizes AI systems by risk, mandating assessment and reporting for high-risk systems. The UK is encouraging sector-specific regulatory measures by existing authorities. China stands out as one of the few nations with finalized laws enhancing security around GenAI and establishing oversight agencies.



6.1 USA


In the United States, the regulation of AI is currently a complex and multifaceted landscape involving various actors. Congress is advocating for preemptive legislation to establish regulatory guidelines for AI products and services, focusing on user transparency, government reporting, and alignment with American values. However, the vagueness of the proposal raises concerns about its effectiveness. Simultaneously, the Biden Administration is witnessing competition among federal agencies to implement a general blueprint for an AI Bill of Rights, emphasizing the development of "safe and effective" systems without clearly defining crucial terms. Efforts by the Department of Commerce, specifically the National Telecommunications and Information Administration (NTIA), stand out for their inquiry into the usefulness of audits and certifications for AI systems, demonstrating a more specific approach.


On the state level, AI-related legislation has emerged in at least 17 states, with proposals varying from incentivizing local development of AI products to limiting its use in specific applications like healthcare and hiring. However, the lack of specificity in many of these proposals and the need for additional legal authority from Congress raise questions about the immediate impact of these regulations on AI development. The regulatory landscape is further complicated by recent Supreme Court decisions shifting power from federal regulators to the courts and states, adding fragmentation and uncertainty to enforcement actions, while the rapidly evolving nature of technology continues to outpace regulatory efforts.


The Executive Order issued by the President of the USA in October 2023, released a few days ahead of the AI Safety Summit in London, signifies a holistic approach to tackling the multifaceted implications of artificial intelligence (AI) for Americans' safety, security, privacy, and civil rights. The order mandates transparency in the development of powerful AI systems, requiring companies to share safety test results with the government, particularly for models posing risks to national security and public health. It also calls for Congress to enact bipartisan data privacy legislation and emphasizes federal support for privacy-preserving techniques. To counteract discrimination, bias, and injustices, the order provides clear guidance on preventing AI algorithms from exacerbating discrimination and promotes best practices in criminal justice. Furthermore, it supports responsible AI use in healthcare, drug development, and education while addressing workers' rights, fostering innovation, and encouraging international collaboration to ensure responsible governance of AI. Overall, the Executive Order outlines a comprehensive strategy that prioritizes safety, security, and ethical governance in the development and deployment of AI technologies both domestically and internationally.


Implementing the Executive Order (EO) faces hurdles, particularly in swiftly recruiting AI experts who demand competitive salaries from the private sector. The challenge lies in relying on executive authorities for AI regulation, given the inability of the US Congress to legislate effectively on potential harms arising from the adoption of advanced digital technologies.



6.2 European Union


For competition law, and its application to technology companies in particular, the momentum over the last few decades has already relocated from the U.S. to Europe. As the EU continues to pass substantial new internet legislation, Congress dithers, leaving the FTC and other federal agencies largely without the tools or resources to compete with their European counterparts.


In Europe, the General Data Protection Regulation (GDPR) provided a foundational framework with implications for AI systems, particularly in terms of data protection and privacy. The European Union (EU) was also actively exploring additional regulations specific to AI, aiming to balance innovation with the protection of fundamental rights. The European Commission released its proposal for the Regulation on Artificial Intelligence in April 2021, outlining a risk-based approach and emphasizing high-risk AI applications. The proposal aimed to establish clear rules for the development and use of AI, fostering trust and promoting human-centric AI systems within the EU.


In June 2023, the European Parliament approved the EU AI Act, the first of its kind in the world. The Act ensures that generative AI tools such as ChatGPT will be placed under greater restrictions and scrutiny. Developers will have to submit their systems for review and approval before releasing them commercially. Parliament also prohibited real-time biometric surveillance in all public settings and "social scoring" systems. The Act classifies AI applications into three main categories: banned practices, high-risk systems, and other AI systems. Banned practices involve using AI for subliminal manipulation or exploiting vulnerabilities that could cause harm, employing real-time remote biometric identification in public spaces for law enforcement, or utilizing AI-derived 'social scores' to unfairly disadvantage individuals or groups. High-risk systems, which pose significant threats to health, safety, or fundamental rights, require a compulsory conformity assessment before being launched in the market. The AI Act also provides for the creation of a European Artificial Intelligence Board to facilitate national cooperation and ensure compliance with the regulation.


Similar to the European Union's General Data Protection Regulation, the AI Act could potentially become a global standard. It has already influenced legislative developments beyond Europe, as seen in Brazil's Congress passing a bill in September 2021 to create a legal framework for artificial intelligence. Among the potential measures to be proposed is a requirement for AI developers, including those working on products like OpenAI's ChatGPT, to declare whether copyrighted material was used in training their technology. This issue has become salient with the recent case of the New York Times suing OpenAI and Microsoft for using its copyrighted publications to train ChatGPT without its consent.




6.3 China


China is currently implementing comprehensive regulations on artificial intelligence (AI), marking a significant development in the global AI governance landscape. These regulations cover various aspects, including recommendation algorithms, synthetically generated images, and chatbots resembling ChatGPT. The 2021 regulation focuses on recommendation algorithms, aiming to prevent excessive price discrimination and safeguard the rights of workers under algorithmic scheduling. In 2022, rules for deep synthesis mandated conspicuous labels on synthetically generated content, while the 2023 draft rules on generative AI prioritize information control and demand that both training data and model outputs be "true and accurate." All three regulations require developers to file information with China's algorithm registry, a new government repository collecting data on algorithm training and enforcing a security self-assessment.


China's evolving AI governance framework is poised to influence the development and deployment of AI technology domestically and globally. With an emphasis on information control, these regulations not only impact Chinese technology exports but also have implications for international AI research networks. Regulators are moving quickly, both to incentivize home-grown AI products and services and to define how they can and cannot operate. Not only could this limit how non-Chinese companies interact with over a billion potential Chinese users, it could also, by being first, become the de facto legal regime for future applications. The stringent requirements, such as ensuring accuracy in generative AI outputs, could pose challenges for AI developers, shaping the future landscape of AI innovation and deployment within and beyond China's borders.




7. Recommendations for Various Stakeholders


In recent years there has been intense competition between AI developers to build products quickly. Competition on AI has raised concern about potential “race to the bottom” scenarios, where actors compete to develop AI systems and under-invest in safety measures. Laissez-faire AI regulation by significant world economies often enables these scenarios, since AI is being seen as a tool of geo-digital superiority. The technology has been touted to be as transformative as the First Industrial Revolution of 18th- and 19th-century Europe. The French President Macron recently expressed concern about the EU's new AI Act hampering European innovation. In such scenarios, it could be challenging for AI developers to commit unilaterally to stringent safety standards, lest their commitments put them at a competitive disadvantage. The risks from this “race” dynamic between countries will be exacerbated if it is technologically feasible to maintain or even accelerate the recent rapid pace of AI progress.


It is in this context that we suggest the following recommendations-


1. Regulators and policymakers must collaborate by engaging with industry stakeholders, technology experts, and consumer advocacy groups. This collaboration ensures that regulations are well-informed, practical, and adaptable to the fast-paced advancements in AI and ML.


2. Algorithmic investigations and audits have emerged as a potent tool for regulators in AI governance, revealing flaws like inaccuracy and discrimination. Scientific research supports the efficacy of audits, and they may become a prevalent approach in AI regulation. The EU's AI Act emphasizes regulators' authority to demand information on high-risk algorithmic systems for compliance assessment, and various regulators worldwide have initiated algorithmic audits.


3. Another approach could be to develop regulatory sandboxes. An AI regulatory sandbox serves as a collaborative platform between regulators and AI developers, aiming to enhance communication, streamline regulatory compliance, and provide legal certainty to companies while deepening regulators' understanding of AI system design, development, and deployment.


4. A novel approach to eliminate the worst outcomes of AI could be the development of an AI assurance industry. AI assurance encompasses a diverse array of technology companies, such as Weights & Biases, Babl AI, and Trustible, specializing in monitoring, evaluating, and ensuring legal compliance of algorithmic systems. These companies offer varied services, including bespoke software aiding algorithmic development, documentation of data and models for regulatory compliance, and full algorithmic audits with compliance services.


5. Regulators should actively encourage information from affected individuals and whistleblowers within AI developers, both of whom can offer unique insights into algorithmic systems. Developers themselves, particularly data scientists and machine-learning engineers, possess intimate knowledge of algorithmic systems and are well-placed to understand their societal impact, harms, and legal violations. Examples like Frances Haugen exposing Facebook's internal documents and Peiter Zatko's revelations about Twitter underscore the importance of whistleblowers in shedding light on algorithmic issues.


6. Regulators should actively consider what steps are necessary and valuable in their domains to ensure their regulatory mission is preserved. This includes cataloging and observing emerging uses of algorithmic systems in their field, exploring what their existing statutory authority allows for, and hiring staff with expertise in algorithmic systems. Regulators may benefit from a gap analysis—identifying where current authorities and capacities are lacking so that they can inform legislators, who are far less likely to understand the nuances of every regulatory subfield.


INDIAN APPROACH FOR AI REGULATION - PRESENT AND POTENTIAL


We bring the Indian context into AI development and regulation for two key reasons:


● Firstly, in line with the suggestions of Indian policy professionals, the country possesses a unique last-mover advantage in AI development and regulation. This implies that India can seamlessly integrate concerns for accountability within its AI ecosystem's development, setting it apart from other global giants that address accountability concerns separately.




● The second reason for discussing India is its potential impact on global AI regulation. India’s model for an AI regulatory ecosystem could serve as an example for other developing countries facing similar or even greater constraints. Considering India's status as the country with the world's largest population, marked by its diversity in all aspects, a regulatory approach tailored to the needs of its society and economy could provide elements applicable to all major global contexts. This approach would make AI regulation truly global, aligning with India’s vision for #AIforALL.


India is situated in a unique context, with certain foundational issues to be worked upon in its trajectory of AI development and regulation. With lower spending on research and development (0.6% of GDP), low computational and processing capacity, limited data availability and a lack of skilled workforce, AI development has remained slow.


India faces the following unique challenges in integrating its cultural and economic context into its AI trajectory. India's economic development stage differs from that of the EU and the U.S., necessitating the identification of specific negative consequences of AI and the development of targeted regulations. Additionally, India's cultural context is distinct, emphasizing the importance of aligning regulations with the country's cultural identity and values. Other challenges include factoring in the dynamic nature of AI and developing algorithmic auditing capabilities.


Accordingly, the way forward involves a cautious, context-specific approach that emphasizes the development of regulatory capabilities, encourages self-regulation by businesses developing AI-powered tools, and promotes the indigenous development of AI models since using India's own datasets can help in handling biases and aligning AI systems with the country's cultural values. This approach supports industrial and national outcomes while minimizing potential risks. However, the government should adopt a cautious approach, avoiding overly restrictive regulations that stifle growth while addressing the risks associated with AI.




Meet The Thought Leader



Subham Rajgaria is currently pursuing his MBA at Harvard Business School. Previously, he worked at WestBridge Capital, an $8Bn crossover fund, where he was part of the Consumer and SaaS investing teams. Before that, he worked as a Business Analyst at McKinsey & Co. He graduated in Computer Science and Engineering from IIT Kharagpur.









Meet The Authors (GGI Fellows)



Bhumika Nebhnani, hailing from Jaipur, Rajasthan, is currently pursuing a Master's in Political Science from the Faculty of Social Sciences at the University of Delhi. A university Gold Medalist and alumna of Miranda House, she has also served as the Chairperson of the SDG Council Global Youth India. Her areas of interest encompass International Relations, Public Policy, Gender Equality, and the emerging field of AI Ethics.






Manvi Katwal, based out of the small town of Hamirpur in Himachal Pradesh, works as a consultant for Deloitte, with 3 years of work experience across FMCG, the Ministry of Agriculture, and other sectors.

She did her B.Tech from NIT Hamirpur. She wants to diversify her experience and contribute to the public policy domain in the future.








Baibhav Patel, based in Bhubaneswar, is a dedicated advocate for community empowerment with 5 years of experience in the non-profit sector. He holds a B.A. in International Development from the University of East Anglia. With a diverse background, he has actively contributed to projects on climate change adaptation, disabled women's economic empowerment, and impactful content creation. Baibhav is also a co-founder of Project Raahat, which addressed COVID-19 challenges, and Al Zaffran, a successful gourmet cloud kitchen.




Guruprasad Joshi is based out of Bangalore, Karnataka. He works as a Senior Associate at the travel-tech start-up “headout” in its operations and strategy division. He previously worked with Flipkart as an Assistant Manager, where he was responsible for cost reduction and end-to-end supply chain planning. In his free time, he loves reading books on self-improvement, business and politics, exploring new places through travel, and listening to podcasts. Curious about the start-up ecosystem and a believer in the impact of entrepreneurship on society, he wants to leverage his skills by contributing to building products and services that solve the problems of modern society.



If you are interested in applying to GGI's Impact Fellowship program, you can access our application link here.


 

An in-depth examination of India's AI and the Accountability Conundrum is a topic we aim to delve into in a future paper.


APPENDIX:


1. Different types of Artificial Intelligence


● Artificial Narrow Intelligence (ANI)


Artificial Narrow Intelligence is also referred to as weak AI. A weak AI system is designed to complete a specific task. For example, an AI system built for natural language processing, playing chess, language translation, facial recognition, speech recognition, etc., is considered Artificial Narrow Intelligence. ANI systems are only programmed to complete a single task. While the completion of this task might be impressive, it is only adept at a particular task, which is a far cry from the AI models we see in science fiction books and movies. Every instance of Artificial Intelligence you have ever interacted with or heard about, from Alexa and Siri to ChatGPT, is an example of Artificial Narrow Intelligence.


● Artificial General Intelligence (AGI)


Artificial General Intelligence is also referred to as strong AI. A strong AI system can accomplish any intellectual task that a human can. Strong AI systems are merely hypothetical. No Artificial Intelligence has ever exhibited the capacity to match a human’s intelligence and problem-solving skills. Data scientists and software engineers are working towards building this type of AI system. However, there is real debate among researchers and scientists if this type of AI software is even possible to create.


● Artificial Superintelligence (ASI)


If Artificial General Intelligence is merely hypothetical, ASI systems are even more so. In theory, Artificial Superintelligence would surpass human intelligence in all possible aspects. This type of AI is most often depicted in science fiction, but given that general intelligence has not been achieved to this point, superintelligence is far beyond the realm of our current technical capabilities.


2. Benefits of new age AI systems


New-age technologies bring a lot of positive impact to society and the lives of people. We witnessed this with the internet and subsequently with social media. Hence, acknowledging the positives of the technology becomes important. Here are some ways in which AI is going to change how we live our lives:


● Autonomous Vehicles and Aircraft: AI and IoT technologies are driving the development of autonomous vehicles and drones for surveillance and delivery services.


● Digital Assistants: Voice-controlled digital assistants, powered by AI, are gaining popularity, offering more advanced capabilities including applications in healthcare.


● Food Ordering Sites: AI algorithms in food delivery apps personalise recommendations and streamline the ordering process through conversational commands.


● Music and Media Streaming Services: Streaming platforms employ AI to provide personalised content recommendations and ensure a smooth user experience.


● Plagiarism Detection: AI tools analyse extensive data to detect plagiarism, even checking sources in various formats and languages.


● Banking: AI improves financial operations, aiding in investment decisions and speeding up loan and credit card processing.


● Credit and Fraud Prevention: AI and machine learning algorithms quickly process transactions and detect potentially fraudulent activities, enhancing security.


● Reduction in human error: One of the biggest benefits of Artificial Intelligence is that it can significantly reduce errors and increase accuracy and precision.


● Faster Decision-making: Faster decision-making is another benefit of AI. By automating certain tasks and providing real-time insights, AI can help organisations make faster and more informed decisions.


● Enhanced productivity: Many studies show that humans are productive for only about 3 to 4 hours in a day. Humans also need breaks and time off to balance their work life and personal life. But AI can work endlessly without breaks.


3. Methodology


For our recommendations on making AI more accountable, we have used secondary data as our source. After analysing multiple negative use cases, we have narrowed down our analysis to three stakeholders, viz. AI users, AI developers and AI regulators for the purposes of simplification in our paper. For each stakeholder in AI’s developmental trajectory, we analyse their meaning, their roles and responsibilities and present a few case studies to highlight how one stakeholder contributes to an AI model which can create risks for society. While most case studies present multiple stakeholders at fault and thus create an accountability conundrum, we choose to highlight the dominantly responsible stakeholder in the cases we analyse. Finally, we look at how the major nation-states individually and collectively shape the global landscape on AI regulation to solve the accountability conundrum.


Based on an inductive study of both - the stakeholder analysis and the state of global regulations on AI - we suggest measures pertaining to each stakeholder as well as generic ones for any nation-state or global consensus on AI to consider.


At the same time, we are aware of the shortcoming of enforceability of rules by global conventions (the Paris Agreement under UNFCCC being a case in point), and see our stakeholder specific recommendations being of greater utility for either self-regulation by stakeholders or for policy formulation by nation-states.


This is in the context of our theoretical understanding of the difference between domestic and international environments, based on the ordering principles highlighted by the neo-realist scholar Kenneth Waltz. While domestic ordering is based on the principle of hierarchy, with nation-states enforcing the rules, international ordering is anarchic and lacks enforceability. However, our recommendations could still be useful in defining a legally-binding global standard on AI regulation (a model like the CITES framework for the environment), modifiable by respective nation-states to suit their respective contexts.




