4 Mar 2026 · Source: The Indian Express

Human Agency is Key to Building Trust in Artificial Intelligence Systems

For AI to be truly trustworthy, it must be designed with human oversight and ethical considerations at its core.


Photo by Satyajeet Mazumdar

To ensure people trust Artificial Intelligence, it needs to be designed and controlled by humans, following strong ethical rules. This way, AI acts as a helpful tool that reflects our values and doesn't cause harm or unfairness.

Embedding human agency and robust ethical principles is paramount for the responsible development and deployment of Artificial Intelligence (AI) systems, as highlighted by recent discussions on technological governance. Without direct human oversight, clear accountability mechanisms, and a steadfast focus on core human values, AI systems inherently risk eroding public trust and exacerbating existing societal biases. The prevailing argument advocates for a comprehensive framework where AI is designed to serve humanity's best interests, rather than operating autonomously without moral or ethical guidance.

This approach emphasizes that human intervention is critical at every stage of the AI lifecycle—from design and data curation to deployment and monitoring. It seeks to ensure that AI technologies are transparent, fair, and accountable, thereby preventing unintended consequences and promoting equitable outcomes. The integration of human agency acts as a crucial safeguard against algorithmic discrimination and the potential for AI to make decisions that conflict with societal norms or individual rights.

For India, a nation rapidly embracing digital transformation and AI integration across sectors, this perspective is particularly vital. Prioritizing human agency in AI development aligns with India's democratic values and its commitment to inclusive growth, ensuring that technological advancements benefit all citizens. This topic is highly relevant for the UPSC Civil Services Examination, particularly under GS Paper 3 (Science and Technology, especially developments in AI and their applications) and GS Paper 4 (Ethics, Integrity, and Aptitude, focusing on ethical dilemmas in technology and governance).

Editorial Analysis

The author strongly advocates for embedding human agency and ethical principles at the core of Artificial Intelligence development and deployment. This perspective is rooted in the belief that without robust human oversight and accountability, AI systems risk undermining public trust and amplifying existing societal biases, ultimately failing to serve humanity effectively.

Main Arguments:

  1. Human agency is fundamental for building trust in AI systems, as AI's potential to erode public trust and exacerbate societal biases necessitates human oversight, accountability, and a focus on human values.
  2. AI should function as a tool serving humanity, rather than an autonomous entity devoid of moral guidance, requiring its design and operation to be anchored in ethical principles and human control.
  3. Ancient wisdom traditions, such as the Mahabharata, Ramayana, and Quran, offer conceptual models for integrating human agency and moral compass into complex systems, providing a philosophical foundation for responsible AI development.
  4. A robust ethical framework and legislative measures are crucial for governing AI, with examples like the UN Secretary-General's call for a global digital compact and the pioneering EU AI Act demonstrating international efforts towards responsible AI.
  5. India possesses a unique opportunity to lead in human-centric AI governance, leveraging its technological capabilities and philosophical heritage to champion an approach that prioritizes moral accountability and value-driven AI.
  6. The proposed MANAV model (Moral Accountability, Nurturing Agency, Value-driven AI, and Vigilance) emphasizes the need for accountability in every digital transaction, ensuring transparency and ethical conduct in AI operations.

Conclusion

To ensure AI serves humanity and fosters trust, it must be anchored by human agency, ethical principles, and robust accountability mechanisms. This requires a global commitment to frameworks that prioritize human values, with nations like India poised to lead in developing a human-centric approach to AI governance.

Policy Implications

Specific policy implications include the development of a robust ethical framework for AI, the implementation of human oversight and accountability mechanisms in AI systems, and the establishment of legislative frameworks similar to the EU AI Act. There is also a call for global digital compacts to ensure international cooperation and shared principles for AI governance.

Expert Analysis

The increasing integration of Artificial Intelligence into public and private sectors presents a critical governance challenge. While AI offers immense potential for efficiency and innovation, its deployment without adequate human agency and ethical safeguards risks exacerbating societal inequalities and eroding public trust. A robust regulatory framework is not merely desirable but imperative to guide this technological evolution.

India, with its vast digital population and ambitious Digital India initiatives, must prioritize a human-centric approach to AI. This involves not just technological development but also establishing clear lines of accountability and ethical guidelines for AI systems. The absence of a dedicated AI law, unlike the pioneering EU AI Act, leaves a regulatory vacuum that could be exploited, leading to unchecked algorithmic biases and privacy infringements.

Drawing lessons from global efforts, India should consider a multi-layered approach. This includes developing national AI policies that mandate transparency, explainability, and human oversight in AI decision-making, especially in critical sectors like healthcare and justice. Furthermore, fostering public awareness and digital literacy is crucial to empower citizens to understand and engage with AI systems responsibly.

The concept of MANAV, emphasizing Moral Accountability, Nurturing Agency, Value-driven AI, and Vigilance, offers a valuable framework. This model aligns with India's philosophical traditions and can serve as a guiding principle for developing indigenous AI solutions that are not only technologically advanced but also ethically sound and socially responsible. Such an approach would position India as a leader in responsible AI governance on the global stage, contributing to the UN Secretary-General's global digital compact.

Visual Insights

AI Systems: Risks & India's Response (March 2026)

Key statistics highlighting recent challenges and India's efforts in building responsible AI systems.

  • AI Policy Rejections (Tier 2/3): ~68%. An audit in 2024 found that an AI-driven claim-approval system rejected about 68% of policies from Tier-2 and Tier-3 districts due to biased training data, highlighting fairness issues.
  • Deepfake Scam Loss: US$25.6 million. In early 2024, a Hong Kong-based multinational lost this amount to a deepfake scam, demonstrating the weaponization of generative AI.
  • Payments Platform Sales Freeze: ₹2 billion. A June 2024 incident saw an Indian payments platform's AI-driven fraud-detection engine flag legitimate transactions, temporarily freezing this amount and highlighting model-drift risks.
  • GPUs Onboarded (IndiaAI Mission): 38,000+. Under the IndiaAI Mission, over 38,000 GPUs have been onboarded through a subsidized national compute facility, boosting indigenous AI development.

Evolution of AI & India's Governance Framework

Key milestones in the history of Artificial Intelligence and the development of India's AI governance strategy.

The journey of AI from a theoretical concept to a practical tool has been marked by rapid technological advancements. India's strategy has evolved from initial adoption goals to a comprehensive governance framework, driven by both the potential of AI and the emerging ethical and safety challenges.

  • 1950: Alan Turing proposes the Turing Test, a foundational concept for AI.
  • 1956: The term 'Artificial Intelligence' is coined at the Dartmouth conference.
  • 1980s-90s: Rise of machine learning, allowing systems to learn from data.
  • 2018: NITI Aayog releases the 'National Strategy for Artificial Intelligence #AIforAll', laying the groundwork for India's AI vision.
  • Early 2024: A deepfake scam (US$25.6 million loss) and AI bias in insurance (~68% rejections) highlight urgent risks.
  • June 2024: An Indian payments platform faces a ₹2 billion sales freeze due to AI model drift.
  • 2025: The Repealing and Amending Bill, 2025, signals modernization of governance, influencing tech regulation.
  • 2026: India hosts a global AI summit and unveils the 'India AI Governance Guidelines', the 'MANAV framework', an 'AI Safety Institute', and the 'AI Governance Group (AIGG)'.

Quick Revision

1. Human agency is crucial for building trust in Artificial Intelligence systems.

2. AI systems risk eroding public trust and exacerbating societal biases without human oversight.

3. AI should be a tool that serves humanity, not an autonomous entity without moral guidance.

4. Ethical frameworks and legislative measures are essential for governing AI.

5. The UN Secretary-General has called for a global digital compact for digital cooperation.

6. The EU AI Act is a pioneering legislative framework for regulating AI.

7. India is uniquely positioned to champion a human-centric approach to AI governance.

8. The MANAV model (Moral Accountability, Nurturing Agency, Value-driven AI, and Vigilance) emphasizes accountability in digital transactions.

Exam Angles

1. GS Paper 3: Science and Technology - Developments in AI and their applications, ethical implications of technology.

2. GS Paper 4: Ethics, Integrity, and Aptitude - Ethical dilemmas in the use of AI, accountability, transparency, and human values in governance.

3. GS Paper 2: Governance - Role of government in regulating emerging technologies, policy frameworks for digital transformation.

More Information

Background

Artificial Intelligence (AI), as a field, has evolved significantly since its inception, moving from theoretical concepts to practical applications impacting various aspects of human life. Early discussions around AI often focused on its technical capabilities, but as AI systems became more sophisticated, concerns about their societal implications began to emerge. The concept of AI Ethics gained prominence as researchers and policymakers recognized the potential for AI to perpetuate or even amplify human biases present in training data, leading to unfair or discriminatory outcomes.

Historically, technological advancements have often raised questions about human control and accountability. The current emphasis on human agency in AI stems from a growing awareness that while AI can offer immense benefits, its autonomous operation without clear human oversight can lead to unforeseen ethical dilemmas and a loss of public trust. This necessitates a proactive approach to embed human values and ethical considerations into the very fabric of AI design and deployment, rather than addressing issues retrospectively.

Latest Developments

In recent years, there has been a global push towards developing frameworks for Responsible AI. Organizations like the OECD, UNESCO, and the European Union have published guidelines and recommendations emphasizing principles such as transparency, fairness, accountability, and human oversight in AI systems. India, through initiatives like NITI Aayog's AI Strategy, has also articulated its vision for 'AI for All,' focusing on inclusive and ethical development of AI technologies. Several countries are actively exploring regulatory measures to govern AI, particularly concerning data privacy, algorithmic bias, and accountability for AI-driven decisions. The discussions around a comprehensive data protection law in India, such as the Digital Personal Data Protection Act, 2023, are also intrinsically linked to ensuring ethical AI development, as data forms the bedrock of AI systems. Future steps are expected to involve the creation of dedicated AI governance bodies, the establishment of clear legal liabilities for AI-related harms, and fostering international cooperation to set global standards for ethical AI.

Frequently Asked Questions

1. Why is there a renewed global emphasis on 'human agency' in AI now, rather than just focusing on technological advancements?

The shift towards emphasizing human agency in AI is driven by the increasing sophistication and widespread deployment of AI systems. As AI impacts more aspects of life, concerns have grown about its potential to erode public trust and exacerbate existing societal biases if not guided by human values and oversight. Recent discussions highlight the need for a comprehensive framework where AI serves humanity's best interests, rather than operating autonomously without moral or ethical guidance.

2. The UN Secretary-General's call for a global digital compact is mentioned. What is its significance for Prelims, and what's a common trap UPSC might set?

For Prelims, the significance lies in recognizing the global push for digital cooperation and governance, especially concerning emerging technologies like AI. The UN Secretary-General's call underscores the need for international ethical frameworks and legislative measures for AI. A common trap could be confusing this 'global digital compact' with other specific digital initiatives or attributing the call to a different international body or country.

Exam Tip

Remember that the 'global digital compact' is a broad initiative for digital cooperation, called for by the UN Secretary-General, not a specific AI-only treaty. Focus on the 'who' and 'what' of such international calls.

3. How does India's 'AI for All' strategy by NITI Aayog align with the global push for human agency and ethical AI?

India's 'AI for All' strategy, articulated by NITI Aayog, aligns well with the global emphasis on human agency and ethical AI. This strategy focuses on the inclusive and ethical development of AI technologies. By prioritizing inclusivity and ethics, India inherently acknowledges the need for human oversight, accountability, and the embedding of human values in AI systems, ensuring that AI serves the broader societal good rather than operating without moral guidance.

4. What does it mean for AI to 'exacerbate existing societal biases' without human oversight, and how can human agency prevent this?

AI systems learn from the data they are trained on. If this data reflects existing societal biases (e.g., historical inequalities in hiring or lending), the AI can learn and perpetuate these biases, leading to unfair or discriminatory outcomes. Human agency is crucial to prevent this by ensuring:

  • Careful curation and auditing of training data to identify and mitigate biases.
  • Design of algorithms with fairness and equity as core principles.
  • Continuous monitoring and evaluation of AI system outputs for biased results.
  • Establishing clear accountability mechanisms for AI-driven decisions.
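The auditing step above can be made concrete. The sketch below, using entirely hypothetical data and an illustrative metric (the demographic parity gap, i.e. the largest difference in approval rates across groups), shows the kind of group-wise check a human auditor might run on a model's decisions; the group names, threshold logic, and numbers are not from the article.

```python
# Sketch: auditing a model's decisions for group-level bias.
# All data here is hypothetical; the metric (demographic parity gap)
# is one common fairness check among several.

from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in approval rates across groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical claim decisions: (district tier, approved?)
decisions = [
    ("tier1", True), ("tier1", True), ("tier1", True), ("tier1", False),
    ("tier2", True), ("tier2", False), ("tier2", False), ("tier2", False),
]

rates = approval_rates(decisions)
gap = demographic_parity_gap(rates)
print(rates)  # {'tier1': 0.75, 'tier2': 0.25}
print(gap)    # 0.5 -> a large gap flags the model for human review
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal that should trigger the human review and data re-curation the bullet points describe.
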

5. If a Mains question asks about 'building trust in AI systems,' how can I effectively integrate the concept of 'human agency' into my answer?

To effectively integrate 'human agency' into a Mains answer on building trust in AI, structure your points around human involvement at every stage of the AI lifecycle. Emphasize that trust comes from ensuring AI is a tool serving humanity, not an autonomous entity. Your answer should cover:

  • Design Phase: Human-centric design principles, ethical considerations embedded from the start.
  • Data Curation: Human oversight in selecting and cleaning data to prevent biases.
  • Deployment & Monitoring: Human intervention for critical decisions, continuous human monitoring for performance and ethical compliance.
  • Accountability: Clear human accountability for AI system outcomes.
  • Ethical & Legislative Frameworks: Human-led development of robust ethical guidelines and legislative measures.
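The 'Deployment & Monitoring' point, human intervention for critical decisions, is often implemented as a human-in-the-loop pattern: the system acts on its own only above a confidence threshold and otherwise defers to a reviewer. The sketch below is a minimal illustration of that pattern; the interface, the 0.9 threshold, and the reviewer behaviour are all illustrative assumptions, not a description of any real system.

```python
# Sketch: routing low-confidence AI decisions to a human reviewer,
# with an audit trail of who decided. Hypothetical interface; the
# 0.9 threshold is illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # the proposed outcome
    confidence: float  # model's self-reported confidence, 0..1
    decided_by: str    # "model" or "human", for accountability

def decide(model_label, model_confidence, human_review, threshold=0.9):
    """Accept the model's decision only above the confidence threshold;
    otherwise defer to a human reviewer and record who decided."""
    if model_confidence >= threshold:
        return Decision(model_label, model_confidence, "model")
    return Decision(human_review(model_label), model_confidence, "human")

# Hypothetical reviewer that overturns a flagged rejection:
reviewer = lambda proposed: "approve"

auto = decide("approve", 0.97, reviewer)
escalated = decide("reject", 0.55, reviewer)
print(auto.decided_by, escalated.decided_by)  # model human
```

Recording `decided_by` on every decision is what turns the 'Accountability' bullet into something auditable: for any outcome, there is a named human or a named model version answerable for it.
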

Exam Tip

Instead of just listing points, explain *how* human agency contributes to trust in each aspect. Use keywords like 'accountability,' 'transparency,' and 'ethical design' to enrich your answer.

6. What is the distinction between 'AI Ethics' and 'Responsible AI' as discussed in the context of human agency?

While often used interchangeably, 'AI Ethics' generally refers to the theoretical principles and moral considerations guiding the development and use of AI. It's about *what* is right or wrong. 'Responsible AI,' on the other hand, is the practical application of these ethical principles through concrete frameworks, guidelines, and operational practices, ensuring human oversight and accountability. It's about *how* to implement ethical AI in practice, making human agency central to its design and deployment.

7. Beyond guidelines, what kind of 'legislative measures' are being considered globally to ensure human agency and ethical AI?

Globally, discussions are moving towards concrete legislative measures to govern AI and ensure human agency. These measures often include:

  • Laws mandating transparency in AI decision-making processes.
  • Regulations on data privacy and the ethical use of personal data for AI training.
  • Establishing clear accountability frameworks for harm caused by AI systems.
  • Requirements for human oversight in high-risk AI applications (e.g., in healthcare or justice).
  • Prohibitions on certain AI uses deemed ethically unacceptable.

8. What are the primary challenges in implementing 'human intervention at every stage of the AI lifecycle' as advocated for building trust?

Implementing human intervention at every stage of the AI lifecycle, while crucial for trust, faces several practical challenges:

  • Scalability: Manually overseeing vast amounts of data and complex algorithms at scale is difficult.
  • Complexity: AI systems can be 'black boxes,' making it hard for humans to understand their internal workings or decision logic.
  • Cost & Resources: Requires significant investment in skilled personnel, training, and tools for oversight.
  • Defining 'Human': Deciding who the 'human in the loop' should be, their expertise, and their ultimate authority.
  • Accountability Gaps: Establishing clear lines of accountability when multiple human and AI agents are involved.

9. How does the emphasis on human agency in AI fit into the broader global trend of digital governance and cooperation?

The emphasis on human agency in AI is a cornerstone of the broader global trend towards responsible digital governance and cooperation. It signifies a shift from purely technological advancement to a more holistic approach that considers the societal, ethical, and human rights implications of digital technologies. This aligns with calls for a global digital compact, aiming to establish international norms and frameworks for how digital technologies, including AI, are developed, deployed, and governed to ensure they serve humanity's best interests and do not exacerbate global inequalities or conflicts.

10. What is the core message UPSC examiners would want to see regarding the relationship between AI and human values?

The core message UPSC examiners would expect is that AI must be viewed as a tool designed to *serve* humanity's best interests, not an autonomous entity operating without moral or ethical guidance. The relationship should be one where human values and oversight are *embedded* at every stage of the AI lifecycle, ensuring AI *augments* human capabilities and decision-making, rather than replacing human judgment in critical areas. It's about AI being a force for good, guided by human ethics.

Exam Tip

When discussing AI and human values, always emphasize AI as a 'tool' or 'augmenter' under human control, not a 'master' or 'replacement'. Use phrases like 'human-centric AI' or 'AI for good'.

Practice Questions (MCQs)

1. With reference to 'Human Agency' in Artificial Intelligence (AI) systems, consider the following statements:

  1. It primarily refers to the ability of AI systems to make autonomous decisions without human intervention.
  2. Embedding human agency aims to ensure accountability and mitigate societal biases in AI development.
  3. International guidelines for Responsible AI, such as those by OECD, emphasize the importance of human oversight.

Which of the statements given above is/are correct?

  • A.1 and 2 only
  • B.2 and 3 only
  • C.3 only
  • D.1, 2 and 3

Answer: B

Statement 1 is INCORRECT: 'Human Agency' in AI systems refers to the active role of humans in guiding, overseeing, and controlling AI, ensuring that AI serves human values and goals. It is precisely *against* AI making autonomous decisions without human intervention. The concept emphasizes human control, not AI autonomy.

Statement 2 is CORRECT: Embedding human agency in AI development is crucial for ensuring accountability, as humans remain responsible for AI's actions and outcomes. It also helps in mitigating societal biases by allowing human intervention to identify and correct biases in data and algorithms.

Statement 3 is CORRECT: International organizations like the OECD (Organisation for Economic Co-operation and Development) have indeed published principles for Responsible AI, which consistently highlight the importance of human oversight, transparency, and accountability to build trust in AI systems.


About the Author

Richa Singh

Science Policy Enthusiast & UPSC Analyst

Richa Singh writes about Science & Technology at GKSolver, breaking down complex developments into clear, exam-relevant analysis.
