Welcome to प्रragya (IPA: /praːgjaː/) Lab @ BITS Goa, India

While Responsible AI has become the dominant paradigm in discussions of ethical AI development, our research team has introduced the concept of CIVILIZING AI. This framework seeks to move beyond aspirational principles toward quantifiable, actionable standards for AI alignment and safety. At its core, CIVILIZING AI proposes four key metrics: (i) the AI Detectability Index (ADI) to assess how reliably AI-generated content can be distinguished from human-authored content; (ii) the Hallucination Vulnerability Index (HVI) to measure susceptibility to factual errors and ungrounded responses; (iii) the Adversarial Attack Vulnerability Index (AAVI) to evaluate resilience against prompt injection and other adversarial exploits; and (iv) the Carbon Emission Index (CarbonAI) to quantify the environmental impact of AI training and inference. Together, these metrics provide a balanced view of AI systems, ensuring usability, trust, and environmental responsibility while safeguarding against vulnerabilities. Our vision charts a path toward three generations of CIVILIZED AI systems, each progressively advancing in transparency, robustness, and policy compliance, and ultimately supporting AI that aligns with constitutional, cultural, and societal principles.

प्रragya is a vision to civilize future-generation machines, guiding them beyond raw computational capability toward wisdom, responsibility, and ethical discernment. At the heart of this vision lies our pioneering concept of Neural DNA (nDNA), which offers a profound lens through which to interpret the life cycle of foundation models—not as static artifacts of training, but as evolving semantic organisms. Through the nDNA framework, we illuminate how these models inherit, mutate, and transmit latent beliefs, cultural priors, and alignment traits across their developmental stages. This perspective transforms the study of AI from engineering isolated systems to cultivating living semantic entities that can harmonize with human values, societal norms, and constitutional principles—hallmarks of the CIVILIZING AI philosophy championed by प्रragya.


Latest News

Jun 01, 2025: Three papers accepted at ACL 2025 (two in Findings, one in the Student Research Workshop).

Selected Publications

  1. ACL 25
    SEPSIS: I Can Catch Your Lies – A New Paradigm for Deception Detection
    Anku Rani, Dwip Dalal, Shreya Gautam, and 5 more authors
    2023
  2. ACL 25
    DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization
    Amitava Das, Sameer Trivedy, Deepansh Khanna, and 8 more authors
    In Proceedings of ACL 2025, 2025
    A*
  3. ACL 25
    YINYANG-ALIGN: Benchmarking Contradictory Objectives and Proposing Multi-Objective Optimization based DPO for Text-to-Image Alignment
    Amitava Das, Yogesh Narsupalli, Gautam Singh, and 5 more authors
    In Proceedings of ACL 2025, 2025
    A*
  4. COLING 25
    Exploring the Abilities of Large Language Models to Solve Proportional Analogies via Knowledge-Enhanced Prompting
    T. Wijesiriwardene, R. Wickramarachchi, S. R. Vennam, and 5 more authors
    In Proceedings of COLING 2025, 2025
    B
  5. EMNLP 24
    Counter Turing Test (CT²): Investigating AI-Generated Text Detection for Hindi
    Isha Kavathekar, Anku Rani, Ananya Chamoli, and 3 more authors
    In Findings of EMNLP 2024, 2024
  6. EMNLP 23
    FACTIFY3M: A Benchmark for Multimodal Fact Verification with Explainability through 5W Question-Answering
    Vipula Rawte and Amitava Das
    In Proceedings of EMNLP 2023, 2023
    Oral, A*
  7. EMNLP 23
    The Troubling Emergence of Hallucination in Large Language Models - An Extensive Definition, Quantification, and Prescriptive Remediations
    Vipula Rawte, S. Chakraborty, A. Pathak, and 5 more authors
    In Proceedings of EMNLP 2023, 2023
    Oral, A*
  8. EMNLP 23
    Counter Turing Test (CT²): AI-Generated Text Detection is Not as Easy as You May Think
    Shreya Chakraborty, Aman Chadha, Aishwarya Sheth, and 1 more author
    In Proceedings of EMNLP 2023, 2023
    A*