AI has become the operating environment of higher education, not a future prospect. These guidelines translate the MITAOE AI Policy into daily decisions — what to do on the first day of the semester, how to design assessments, how to handle suspected misuse, and how to use AI responsibly.
"The intent is not to police AI. The intent is to graduate engineers and designers who can think with AI, think against it when needed, and think beyond it altogether."
These guidelines are written so any faculty member can read only the section relevant to their immediate task — preparing a syllabus, redesigning an assessment, drafting a paper — without reading the full document. Revisit them each semester, because AI capability keeps evolving.
Foundational Principles
Ten Principles Every Faculty Member Must Know
These underpin every guideline in this document — classroom, course design, assessment, research and ethics.
⚖️
You are accountable, not the AI
Whatever AI produces — lesson plan, rubric, research paragraph — you remain fully responsible for its accuracy, fairness and quality.
📢
Disclose your own AI use first
State on Day 1 which AI tools you used to prepare the course. Students cannot be expected to disclose what they see their teachers hide.
🔍
Verify before you share
Treat every AI output as a draft from a confident but unreliable assistant. Check facts, citations, formulas and code before class.
🔒
Never share confidential data
Student records, marks, unpublished research and exam papers must not enter any public AI tool. Use only institution-approved tools.
📚
Build your own AI literacy first
Complete the AI literacy orientation before integrating AI into a course. We cannot guide students through territory we haven't walked.
🎯
Extend pedagogy, don't replace it
AI removes drudgery so more class time goes to discussion, lab work and design critique — exactly where students' deep learning happens.
🌍
Design for equity
Not every student has a paid subscription. Every AI-permitted task must be completable with free tools.
🇮🇳
Localise actively
Most AI models reflect Western, English data. Adjust examples, names and case studies to Indian engineering and design contexts.
📖
Respect copyright
Do not upload copyrighted textbooks or paywalled articles to AI tools. Cite AI-generated content following the institutional format (APA).
🔄
Adapt every semester
AI capability changes every quarter. Review your course AI policy each semester and share learnings with the AI Implementation Committee.
Learning Outcomes
Redesign Outcomes that AI Can Already Complete
If a student can produce a passable answer in five minutes using AI, the outcome is testing the tool, not the student.
The fix: add local context, defence under questioning, or a tacit skill — none of which AI can easily fake.
Old Outcome (BT: Remember / Understand)
Revised Outcome — Higher-Order Thinking
BT Level
Summarise Topic X
Compare two approaches to X, justify a choice for a given engineering context, and defend it in a viva
Evaluate — L5
Explain how a system works
Diagnose a fault in a malfunctioning system, identify root cause, and recommend corrective action
Analyse — L4
Write code to solve a problem
Design, debug and justify a working solution — then critically evaluate an AI-generated solution to the same problem
Create — L6
Analyse a case study
Evaluate an Indian industry case using own field-visit observations, draw evidence-based conclusions, and present orally
Evaluate — L5
Draft a project report
Design a solution, produce the project technical report, and defend methodology, trade-offs and limitations in a viva
Create — L6
BT = Bloom's Taxonomy revised levels: L4 Analyse · L5 Evaluate · L6 Create — the three higher-order levels AI cannot independently demonstrate.
💡
Three-question test for every Course Outcome: (1) Can AI produce a passable answer in 5 minutes? (2) Which part can only be demonstrated personally, in front of me? (3) Where does the student have to use judgement or tacit skill? If a CO fails these, rewrite it.
Course-Level AI Stance
Five AI Permission Levels
Choose your course level before the semester. State it in the syllabus, on the LMS, and on Day 1 — students should never have to guess what is permitted.
1
Forbidden
No AI of any kind
For foundational courses where the unaided skill is the objective — first-year maths, core programming logic, technical drawing.
2
Ideation
Brainstorm or outline only — no AI content in the final submission
Structure and ideas must be the student's own. Suited to first-year writing and conceptual design courses.
3
Editing
Grammar, syntax and reference formatting only
Substantive content, analysis and conclusions must be the student's own. Suited to lab reports and technical writing.
4
Collaborator
AI may co-produce — student must revise, justify and demonstrate understanding
Suited to capstone projects, design studios and advanced electives. Disclosure and oral defence required.
5
Integrated
Using AI effectively is itself the learning task
Suited to AI ethics, prompt engineering and AI-assisted product design. The collaboration process is assessed directly.
Sample Syllabus Statement — Level 3
"In this course, you may use AI for grammar correction, code-syntax checking and reference formatting only. AI-generated analysis, arguments or conclusions are not permitted. Every submission requires the AI Disclosure Statement. Undisclosed or out-of-scope use is treated as an academic integrity breach."
Assessment Framework
Three-Tier Assessment Structure
Map every assessment to one of three tiers. Every course must include at least one Tier 1 component contributing meaningfully to the final grade.
Tier 1
No AI
Individual mastery under invigilation
The student works entirely independently. Confirms personal understanding without any AI support.
Tier 2
Limited AI
Ideation or review only — student demonstrates own reasoning
AI may scaffold the work; analysis, argument and conclusions must be the student's. Disclosure mandatory.
Project reports · Term papers · Case studies · Design rationales
Tier 3
Full AI
Effective AI use is part of the assessed task
Students are assessed on the quality of the artefact and the quality of their AI collaboration and critical judgement.
AI-assisted design · Prompt engineering · AI ethics assignments
When AI Misuse is Suspected
1
Meet promptly
Talk to the student soon after submission while the work is fresh.
2
State the concern specifically
Identify exactly what raised concern and ask whether AI was used — without pre-judging.
3
Give a fair opportunity to explain
Allow the student to explain their process fully.
4
Follow institutional procedure
If concerns persist, refer to the MITAOE academic integrity procedure (including Grade Moderation, Answersheet Showing, and Rubrics-based Assessment) — do not act unilaterally on the grade.
5
Make it a teaching moment
Regardless of outcome, use the conversation to build responsible AI habits.
⚠️
Do not rely on AI-detection tools as sole evidence. False-positive rates are too high for disciplinary action. Do not input student work into public AI tools for detection purposes.
Required for All Tier 2 & Tier 3 Submissions
AI Disclosure Statement
Complete all four fields and attach to every Tier 2 and Tier 3 assignment. Partial disclosure is treated as non-disclosure. Faculty should adapt this format for their own course-preparation notes.
AI DISCLOSURE STATEMENT
Attach to every Tier 2 and Tier 3 submission · Partial disclosure = non-disclosure
AI Tool(s) Used
List each tool by name and version — e.g., ChatGPT (GPT-4o), Claude 3.7 Sonnet, Google Gemini, GitHub Copilot, Grammarly AI.
Purpose & Role
What was the AI used for? — brainstorming, drafting, code assistance, data analysis, grammar correction, literature search, image generation, etc.
Extent of Use
Describe the proportion of AI involvement — e.g., "20% of the code was AI-generated, then debugged and tested by me," or "AI drafted the introduction, which I substantially revised."
Critical Evaluation
Briefly explain how you verified accuracy, corrected errors and ensured quality before including the AI output in this submission.
Faculty Course-Preparation Disclosure (for your syllabus)
"Sections of this syllabus and the practice question set were drafted with assistance from [tool name and version]. All content has been reviewed, verified and revised by the course faculty, who bears full responsibility for accuracy and pedagogical appropriateness."
Mentoring Students
Guiding Students to Use AI Well
Faculty are the most influential guides students have for responsible AI use. Model what you expect.
🎓
Model disclosure openly
Share how you used AI to prepare a lecture and what you changed. When students see disclosure as professional practice, they adopt it naturally.
🔍
Coach verification as a graduate skill ⭐
Prompt design, hallucination recognition and source cross-checking are professional competencies increasingly tested in placement interviews. Verification is the most transferable AI skill you can teach.
💬
Require reflection, not just output
Ask students to articulate what AI contributed and what they added. That reflection is itself a learning outcome, and it builds metacognitive awareness.
🤝
Treat early lapses as teaching moments
Where policy allows, guide before escalating. The goal is to build lasting disclosure habits — not to catch students out.
Quick Reference
Faculty Do's & Don'ts
Print this and keep it at your desk or post it on your course LMS — Moodle or Google Classroom.
✔ Do
✔ Disclose which AI tools you used to prepare the course on Day 1
✔ Verify every AI output for accuracy before sharing with students
✔ State a clear AI permission level for each course and each assignment
✔ Include at least one Tier 1 (invigilated) assessment per course
✔ Draft feedback with AI — then review and personalise before sending
✔ Use only institution-approved tools for tasks involving student data
✔ Redesign any Course or Learning Outcome that AI can complete in five minutes
✔ Revisit and update your course AI policy every semester
✗ Don't
✗ Present AI-drafted material as entirely your own
✗ Trust AI on technical accuracy without independent verification
✗ Leave students uncertain about what AI use is permitted
✗ Make all assessments open-ended take-home writing tasks
✗ Use AI to assign final grades without your own independent review
✗ Paste student names, roll numbers or marks into any public AI tool
✗ Carry pre-AI course outcomes unchanged into today's assessments
✗ Rely on AI-detection tools as sole basis for disciplinary action
References & Acknowledgements
Built on Global Best Practice
Drawn from leading universities and international bodies — adapted for engineering and design education at MITAOE.
1
MITAOE AI Integrated Teaching & Research Policy (2025–26) — Internal foundational policy, MIT Academy of Engineering
2
Harvard University — Guidelines for Using Generative AI Tools at Harvard
3
Stanford University — Principles for AI Use (January 2025)
4
King's College London — University-wide Principles and Policy on Generative AI
5
University of Sydney — Aligning Assessments to the Age of Generative AI
6
Aalto University — AI Assessment Scale (AIAS) for Education
7
NEP 2020 — Government of India, National Education Policy
8
NITI Aayog — Responsible AI for All: Adopting and Scaling Responsible AI in India
Student Guidelines
AI is a Tool. You are the Engineer.
AI tools can accelerate your learning, but they cannot replace your thinking, your judgement, or your professional skills. These guidelines tell you exactly what is allowed, what is not, and how to use AI in a way that actually makes you better — not just faster.
"The goal is not to prevent you from using AI. It is to make sure that when you graduate, you can do things AI cannot — and you know the difference."
6
Core human skills that AI cannot replace
5
AI Permission Levels across your courses
1
Disclosure form required per Tier 2/3 submission
0
Tolerance for undisclosed AI use
Understanding AI Tools
What AI Can and Cannot Do
Before you use AI, you need to know what it actually is — and what its limits are. This protects you from submitting incorrect work and from developing a false sense of understanding.
✅
What AI is Good At
Generating first drafts and outlines quickly
Explaining concepts in simple language
Debugging code and suggesting fixes
Summarising long documents
Grammar checking and language editing
Brainstorming ideas and alternatives
Formatting references and citations
⚠️
Where AI Fails Dangerously
It confabulates (invents) facts, names, citations
It cannot verify its own output
It reflects biases present in its training data
It cannot reason about your specific lab results
It cannot replace domain expertise or judgement
It has a knowledge cutoff — recent events may be wrong
It cannot ethically make decisions for you
⚠️
Hallucination is real and dangerous. AI tools regularly fabricate references, paper titles, author names and even formulas — with complete confidence. Always verify AI-generated facts against a primary source before using them in any submission.
Permissions in Your Courses
When AI is Permitted — Know Your Level
Each of your courses carries an AI Permission Level (1–5). Your faculty will declare this on Day 1 and in the course syllabus. If you are ever unsure, ask — do not assume.
1
Forbidden
No AI use of any kind
Typically first-year maths, programming fundamentals, technical drawing. The whole point is to build your unaided capability. Using AI here is an academic integrity violation.
2
Ideation
You may use AI only to brainstorm or outline
Your final submission must be entirely your own words and ideas. AI may help you organise thoughts, but cannot contribute any content to the submitted work.
3
Editing
Grammar, syntax and reference formatting only
You may use tools like Grammarly for language, or AI to format references. All analysis, argument and conclusions must be your own. Disclosure form required.
4
Collaborator
AI may co-produce — but you must revise, justify and demonstrate understanding
You must be able to explain every part of your submission. Expect oral defence or follow-up questions. Disclosure form is mandatory. Submitting AI output without understanding it is a violation.
5
Integrated
Using AI well is itself the assessed skill
You are being graded on how thoughtfully you collaborate with AI — your prompt quality, critical evaluation, and what you add beyond the AI output.
🚨
Using AI beyond the permitted level is an academic integrity breach — equivalent to plagiarism, regardless of whether you edited the output. The key rule: when in doubt, disclose and ask your faculty member.
Disclosure Requirements
How to Disclose Your AI Use
Disclosure is a professional practice — not an admission of wrongdoing. Engineers disclose their tools and methods; so do researchers and designers. Start building this habit now.
STUDENT AI DISCLOSURE STATEMENT
Attach to every Tier 2 and Tier 3 submission · Complete all fields honestly
Tool(s) Used
Name every AI tool used — e.g., ChatGPT (GPT-4o), Gemini 1.5, Claude, GitHub Copilot, Grammarly. Include the version if known.
How I Used It
Describe exactly what you asked the AI to do — generate an outline, check grammar, explain a concept, write code, summarise a paper, etc.
What I Changed
Explain what you revised, added, corrected or replaced in the AI output — and why. "I used it as-is" is not acceptable for Tier 2/3 work.
How I Verified
State how you checked the AI output for accuracy — cross-referencing textbooks, running code, consulting papers, checking calculations manually, etc.
Example — Good Disclosure Statement
"I used ChatGPT (GPT-4o) to generate an initial outline for Section 2. I rewrote the entire section using my own analysis and added data from our lab experiment. I used Grammarly to check grammar only. I verified the formula on page 4 against the course textbook (Shigley, Chapter 6) because the AI output had an error in the stress equation."
Academic Integrity
What Counts as AI Misuse
Academic integrity violations involving AI are treated the same as plagiarism at MITAOE. Understanding the line protects you.
✅ Acceptable
Using AI to brainstorm ideas, then writing in your own words
Asking AI to explain a concept you then demonstrate yourself
Using AI to check grammar after writing your own content
Using AI to format your reference list
Submitting AI-co-produced work at Level 4/5 with a full disclosure form
Asking AI to review your code for errors, then understanding and fixing them yourself
🚫 Not Acceptable
Submitting AI-generated text without disclosure
Using AI beyond the permitted level for that course
Copying AI output without verifying or revising it
Submitting work you cannot explain or defend
Asking AI to answer exam or viva questions on your behalf
Using AI-generated citations without verifying they exist
💬
If you're not sure — ask your faculty before submitting, not after. A quick question before submission is far better than an integrity inquiry after.
Your Competitive Advantage
6 Skills That AI Cannot Replace
AI will be your professional tool throughout your career. But employers — and clients — hire the person, not the tool. These six human capabilities will define your edge as an engineer, researcher, and professional.
01
🔎
Critical Evaluation
Ability to verify, validate, question, and critically assess AI-generated information, evidence, and decisions. Knowing when AI is wrong — and why — is itself a graduate-level skill.
02
🧪
Applied Competence
Ability to apply knowledge effectively through practical execution, experimentation, demonstration, and real-world performance. AI cannot run your lab, calibrate your instrument, or take responsibility for your results.
03
🤝
Reasoned Communication
Ability to explain, justify, defend, and articulate decisions, methodology, and conclusions. In a viva, placement interview, or client meeting — you must own every word.
04
🌐
Contextual Innovation
Ability to apply knowledge creatively and ethically in domain-specific, local, interdisciplinary, or real-world contexts. AI optimises within a given frame; you define the frame and adapt it to reality.
05
💡
Problem Framing & Curiosity
Knowing which problem to solve before solving it. Asking the right question — driven by intellectual curiosity and domain intuition — is something no AI can do on your behalf.
06
⚖️
Ethical Responsibility
Taking ownership of decisions that affect people's safety, livelihoods and rights. AI cannot be held accountable — you can. This is the most irreplaceable skill of all.
Quick Reference
Student Do's & Don'ts
Save this and refer to it before every submission.
✔ Do
✔ Check your course AI Permission Level before starting any assignment
✔ Submit the AI Disclosure Form with every Tier 2 and Tier 3 submission
✔ Verify every AI-generated fact, formula and citation independently
✔ Be able to explain and defend every part of your submitted work
✔ Use AI as a starting point — then add your own analysis and judgement
✔ Ask your faculty when you are unsure what is permitted
✔ Use free AI tools (ChatGPT free, Gemini) so cost is never a barrier
✔ Report AI errors and biases you notice — that critical eye has value
✗ Don't
✗ Submit AI-generated work without a disclosure form
✗ Assume AI output is accurate — always cross-check
✗ Use AI beyond the permission level for your course
✗ Submit work you cannot explain if your faculty asks about it
✗ Enter your classmates' names or personal data into public AI tools
✗ Rely on AI as a substitute for attending class or understanding concepts
✗ Use AI-generated references without verifying they actually exist
✗ Let AI make ethical or safety decisions in your project
Approved & Restricted Tools
Which Tools You Can Use
The following guidance applies to personal academic work. Always check with your faculty for course-specific tool restrictions.
✅ Generally Permitted (free tier)
ChatGPT (GPT-4o mini / free) — Text, code, explanations; disclose use
Google Gemini (free) — Writing, summarisation, research assistance
Microsoft Copilot (free) — Integrated in Edge browser and Office apps
Grammarly (free tier) — Grammar and language editing only
GitHub Copilot (student free) — Code assistance; disclose in submissions
Google NotebookLM (free) — Note-taking and study assistance
⚠️ Restrictions Apply
Any tool requiring data upload — Do NOT upload exam papers, classmate data, or institute documents
AI image generators — Check the assignment brief; may not be permitted
Paid/institutional AI tools — Only if institution-provided; do not use personal subscriptions for institute data
AI code-writing tools in exams — Never permitted in any invigilated setting
AI paraphrasing tools — Treated as submission of AI content; disclose and use carefully
Research & Scholarly Integrity
AI in Research — Clear Boundaries, High Standards
AI can accelerate research — but it cannot replace scientific rigour, ethical judgement, or the intellectual ownership that defines authorship. These guidelines align with national and international research-integrity norms, including UGC guidelines and major publisher policies.
"AI is a research instrument, not a co-investigator. You control the hypothesis, the method, the interpretation and the accountability."
⚠️
Cautiously acceptable: AI-assisted writing, summarisation — always with full disclosure
✗
Never acceptable: AI authorship, fabricated data, undisclosed AI text
📋
Always required: Disclosure in all publications and grant submissions
Where AI May Be Used — With Caution
AI in Research — Proceed Carefully
AI is not automatically permitted in research. Each use must be evaluated against your institution's policy, your target journal's guidelines, and your research ethics approval. The following areas may involve AI assistance — but only when appropriately disclosed, validated by the researcher, and consistent with the norms of your discipline. When in doubt, do not use AI, or consult your supervisor first.
⚠️
Important: Many journals, funding bodies, and ethics committees restrict or prohibit certain AI uses without explicit approval. Always check the specific requirements of your target venue before using any AI tool in your research workflow.
📚
Literature Discovery — Use carefully
AI discovery tools (Elicit, Semantic Scholar, Consensus) may help identify relevant papers — but you must read the actual papers. Never cite based on an AI summary alone. Verify every reference independently; hallucinated citations are your responsibility.
✍️
Language & Writing Editing — Limited scope only
AI may assist with grammar, language clarity and structural editing — not with generating scientific content, arguments or conclusions. All intellectual contribution must originate from you. Many journals now restrict even language editing by AI.
📊
Data Analysis Support — Full transparency required
AI-assisted analysis may be considered where the methodology is fully disclosed, reproducible, and independently validated. Never use AI to generate, smooth or adjust data — this constitutes research misconduct.
🧪
Experimental Design Input — Advisory only
AI may be consulted as one input when exploring methodological options. However, the researcher must independently evaluate, justify, and take full ownership of every design choice — AI suggestions are not a substitute for expert judgement.
💻
Code & Simulation Drafts — Validate rigorously
AI-generated code for analysis pipelines may be used as a starting point — but must be fully understood, tested, and validated by the researcher before use. Errors in AI-generated code that affect results are the author's responsibility.
🌐
Translation & Reference Formatting — Lower risk, still disclose
AI formatting of reference lists or translation of your own text into another language carries lower risk — but must still be disclosed. Always verify that references are formatted correctly and that translated content preserves your intended meaning.
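The "validate rigorously" guidance for AI-drafted code above can be made concrete with a small sketch. The function below is a hypothetical stand-in for an AI-generated analysis routine (the name, formula and numbers are illustrative, not from these guidelines); the point is the hand-checked assertions a researcher writes before trusting the draft in any pipeline.

```python
# Suppose an AI assistant drafted this bending-stress helper (sigma = M / Z).
# Illustrative example only: read it, understand it, then test it against
# values you have calculated by hand before it touches real results.

def beam_max_stress(moment_nm: float, section_modulus_m3: float) -> float:
    """Maximum bending stress in Pa for a given moment and section modulus."""
    if section_modulus_m3 <= 0:
        raise ValueError("section modulus must be positive")
    return moment_nm / section_modulus_m3

# Validation written by the researcher, not the AI:
# 1) a hand-calculated case, 2) a boundary case AI drafts often get wrong.
assert beam_max_stress(500.0, 1e-3) == 500e3  # 500 N·m / 0.001 m³ = 500 kPa
try:
    beam_max_stress(100.0, 0.0)
except ValueError:
    pass  # invalid input correctly rejected
else:
    raise AssertionError("zero section modulus must be rejected")
```

If the AI draft fails a hand-calculated case, that failure itself belongs in your notes: it documents the verification step the methods section will later describe.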
Prohibited Uses
Research Integrity Red Lines
These acts constitute research misconduct — with consequences ranging from retraction and disciplinary action to career-ending sanctions.
🚫
Data Fabrication & Falsification
Using AI to generate, alter or misrepresent data, results or findings is fabrication — one of the most serious forms of research misconduct. This includes "cleaning" data in ways that bias results.
🚫
Ghost-Writing Entire Papers
Submitting an AI-generated manuscript as your own intellectual contribution — without substantial authorial involvement — is academic fraud, regardless of how much you edited it.
🚫
Listing AI as an Author
AI cannot be an author on any publication, grant application, thesis or report. Authorship requires accountability, which AI cannot hold. This rule is enforced by all major publishers (Elsevier, Springer, Nature, IEEE).
🚫
Using Hallucinated Citations
AI regularly invents paper titles, author names and journal details. Every reference must be manually verified against the actual source before submission. You are responsible for every citation in your paper.
🚫
Uploading Confidential Data to Public AI
Unpublished research data, proprietary industry datasets, participant data and PII must never be entered into any public AI tool (ChatGPT, Gemini, Claude, etc.). Use only offline or institutionally approved tools.
🚫
Undisclosed AI Use in Peer Review
Using AI to write peer review reports without disclosure violates the confidentiality and intellectual integrity of the peer review process. Many journals now explicitly prohibit this.
Research Workflow
AI-Assisted Research Step by Step
Here is a responsible research workflow that integrates AI at the right stages — and keeps the researcher in control at every critical decision point.
1
Define Your Research Question — Independently
The research question must come from your expertise and understanding of the field. AI may help you refine it, but should not originate it.
AI role: Use literature discovery tools to check if the question is novel
2
Review the Literature
Use AI tools to map the field quickly. Then read the actual papers — do not rely on AI summaries for your analysis or argument.
AI role: Elicit, Consensus, Semantic Scholar for discovery · manual reading for understanding
3
Design Methodology — You Decide
Choose your research design, data collection method and analysis approach. You must be able to justify every methodological choice.
AI role: Suggestion and sanity-check only — human expertise must validate
4
Collect & Analyse Data — With Full Integrity
Data collection must follow your ethics approval. AI-assisted analysis is permitted but must be fully described in the methods section.
AI role: Statistical analysis, visualisation, pattern detection — but never data modification
5
Interpret & Discuss — Your Intellectual Contribution
The interpretation of results is the core of authorship. This is where your expertise, knowledge and judgement are irreplaceable.
AI role: Language editing and structuring the discussion only
6
Write, Disclose and Submit
Document all AI tools used in the methods section or acknowledgements. Verify every reference. Confirm authorship criteria for all listed authors.
AI role: Grammar, language clarity and reference formatting
Authorship & Publication Ethics
Responsible Authorship Rules
All publications from MITAOE must follow institutional and publisher guidelines on authorship and AI disclosure.
👤
Who Qualifies as an Author
Authors must satisfy the following authorship requirements:
Substantial contribution to conception, design, data acquisition or analysis
Drafting or critically revising the intellectual content
Final approval of the submitted version
Accountability for all aspects of the work
AI meets none of these criteria and cannot be listed as an author.
🤖
Disclosing AI Use in Publications
The following must appear in every paper that used AI:
Name and version of every AI tool used
Which sections or tasks involved AI assistance
How AI outputs were verified and revised
Confirmation that no confidential data was entered
Typically placed in the Methods section or Acknowledgements.
📋
Publisher policies: Elsevier, Springer Nature, IEEE, Taylor & Francis, Wiley and most other major publishers now require explicit disclosure of AI use. Failure to disclose is treated as an ethical violation and can result in retraction.
Data Privacy in Research
Protecting Research Data & Participants
Research data — especially participant data — carries legal and ethical obligations that must be maintained when using AI tools.
🔒
Never Enter Into Public AI
Participant names, demographics or identifiers
Unpublished experimental data or results
Proprietary industry or partner data
Medical or biometric data of any kind
Survey responses that can identify individuals
Draft manuscripts under journal embargo
✅
Safe AI Practices for Research
Anonymise or aggregate data before any AI analysis
Use institution-approved, privacy-compliant AI tools for sensitive work
Check that your AI tool has a data retention policy aligned with GDPR / DPDP Act 2023
Store all research data on institution-approved servers
Document your data-handling procedures for ethics review
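The "anonymise before any AI analysis" practice above can be sketched in a few lines. The field names and salt below are illustrative assumptions, not part of the policy; the idea is to replace direct identifiers with a salted hash (kept on institution-approved storage) before any text or records go near an AI tool.

```python
import hashlib

def pseudonymise(record: dict, salt: str) -> dict:
    """Replace direct identifiers with a salted SHA-256 pseudonym before
    AI-assisted analysis. Field names here are illustrative assumptions."""
    out = dict(record)
    for field in ("name", "roll_number", "email"):
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]  # stable pseudonym; not reversible without the salt
    return out

participant = {"name": "A. Student", "roll_number": "MIT-042", "score": 87}
safe = pseudonymise(participant, salt="institution-held-secret")
assert safe["score"] == 87                   # analytic fields survive
assert safe["name"] != participant["name"]   # identifiers do not
```

Because the same salt maps the same identifier to the same pseudonym, records can still be linked across datasets for analysis without exposing who the participant is. The salt must stay on institution-approved servers, consistent with the storage rule above.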
⚠️
India's Digital Personal Data Protection Act 2023 (DPDP Act) imposes obligations on institutions handling personal data. Entering participant data into a foreign AI tool may constitute a compliance breach. Always obtain explicit ethics clearance for your data-handling methodology.
Citation & Attribution
Citing AI — The Right Way
When AI tools contribute to your work, they must be attributed correctly. Different use cases require different citation approaches.
In the Methods Section
"Literature search was supported by Elicit (elicit.com, accessed May 2026). All identified papers were manually verified and read in full. Language editing was performed with assistance from ChatGPT (OpenAI, GPT-4o, accessed April 2026), with all content reviewed and revised by the authors."
In Acknowledgements
"The authors used Claude 3.7 Sonnet (Anthropic) for grammar and clarity editing of this manuscript. The AI tool did not contribute to the scientific content, analysis or conclusions."
In Reference List (APA style)
OpenAI. (2024). ChatGPT (GPT-4o) [Large language model]. https://chat.openai.com
⚠️ Never Do This
Do not cite AI-generated references without verifying they exist in a real database (Google Scholar, PubMed, IEEE Xplore, Scopus). AI-hallucinated citations are a common and serious error.
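A first-pass filter for hallucinated references can be automated, with the caveat that a syntactic check proves nothing about existence. The sketch below (names and regex are the author's illustration, not an institutional tool) only screens for malformed DOIs; every reference that passes must still be resolved via doi.org or found in a real database before citing.

```python
import re

# A well-formed DOI: "10.", a 4-9 digit registrant prefix, "/", then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Syntactic sanity check only. Passing does NOT prove the paper exists:
    resolve https://doi.org/<doi> or search Scholar/Scopus/IEEE Xplore to
    confirm before citing."""
    return bool(DOI_PATTERN.match(doi.strip()))

assert looks_like_doi("10.1000/xyz123")   # plausible shape
assert not looks_like_doi("not-a-doi")    # obvious fabrication caught early
```

Running such a filter over an AI-suggested reference list quickly flags entries with no DOI at all or with a mangled one, which are common signatures of hallucinated citations, before the slower manual verification of each surviving entry.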
Before You Submit
Research Integrity Checklist
Run through this checklist before submitting any paper, conference abstract, grant application or thesis chapter.
Authorship
☐ All listed authors meet the institutional authorship criteria
☐ No AI tool is listed as an author
☐ All contributors who do not qualify as authors are acknowledged
AI Disclosure
☐ All AI tools used are named and disclosed (name + version)
☐ The role of each AI tool is described in Methods or Acknowledgements
☐ AI disclosure follows the target journal's policy
References
☐ Every reference has been manually verified (paper exists, details are correct)
☐ No AI-hallucinated references are included
☐ DOIs or stable URLs are provided where available
Data Integrity
☐ No confidential or participant data was entered into a public AI tool
☐ Data is stored on institution-approved servers
☐ Ethics clearance is in place for the study design and data handling
Ethical AI
Why AI Ethics Matters to You
AI systems are not neutral. They reflect the choices of their designers, the biases in their training data, and the values of the institutions that deploy them. As engineers and researchers who will build, deploy or advise on AI systems, your ethical awareness is not optional — it is a professional duty.
"Technology is neither good nor bad; nor is it neutral." — Melvin Kranzberg's First Law of Technology
⚠️
AI can harm people
Biased hiring algorithms, discriminatory lending models, surveillance systems — unethical AI has real consequences for real people.
🌍
AI shapes society
Recommendation systems influence beliefs. Automated decisions affect livelihoods. Engineers who build these systems bear responsibility.
⚖️
Ethics is a professional skill
IEEE, ISTE, and major engineering bodies now list ethical AI competence as a core professional requirement — not an elective.
Core Ethical Principles
Five Pillars of Responsible AI
These five principles — drawn from UNESCO, the EU AI Act, and IEEE Ethically Aligned Design — form the foundation of ethical AI practice.
01
🔍
Transparency
AI systems and their limitations should be explainable to the people they affect. Hidden AI decision-making in high-stakes contexts (hiring, credit, healthcare) is ethically problematic. Always disclose when AI has been used.
02
⚖️
Fairness & Non-discrimination
AI must not systematically disadvantage people based on gender, race, religion, caste, disability or socioeconomic status. Engineers must test for and mitigate bias before deployment.
03
🔒
Privacy & Data Rights
Individuals have the right to know what data is collected, how it is used, and to have it deleted. AI systems must follow privacy by design — privacy built in from the start, not bolted on as an afterthought.
04
👤
Human Accountability
AI cannot be held responsible for its outputs. A human must always be identifiable as the accountable decision-maker. "The AI decided" is never an acceptable defence for harm caused.
05
🌱
Sustainability & Beneficence
AI should benefit humanity and the environment. Systems should be designed for long-term societal good — not just short-term efficiency or profit. Energy use and environmental impact must be considered.
Bias & Fairness
AI Bias — Recognise and Resist It
AI models learn from historical data — which often encodes historical injustices. Understanding where bias enters helps you build fairer systems and use AI more critically.
📦
Biased Training Data
If historical data reflects discrimination (e.g., fewer women in technical roles), the model learns and replicates that discrimination.
→
⚙️
Biased Model
The model assigns higher scores to candidates who match the historical pattern — even when the pattern reflects bias, not merit.
→
🚫
Discriminatory Output
Real people are denied jobs, loans or opportunities based on patterns that encode historical inequality — amplified at scale.
🇮🇳
AI Bias in Indian Contexts
Most large AI models are trained predominantly on English-language, Western data
Indian languages, names and contexts are often under-represented
Facial recognition systems have shown higher error rates on darker skin tones
Credit scoring AI may disadvantage rural and informal-economy workers
Always validate AI outputs for your specific Indian engineering context
🛠️
What You Can Do
Ask: "Who is NOT represented in this training data?"
Test your AI system on diverse demographic groups before deployment
Document bias testing in your project report
Use fairness-aware ML libraries (e.g., Fairlearn, AIF360) where applicable
Report AI bias you observe — to faculty, to the tool developer, publicly
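The bias-testing step above — checking whether a model selects candidates at very different rates across demographic groups — can be sketched in a few lines of plain Python. This is the demographic parity check; libraries such as Fairlearn provide the same idea as `demographic_parity_difference`. The predictions and group labels below are hypothetical, purely for illustration:

```python
def selection_rates(predictions, groups):
    """Fraction of positive (selected) predictions per demographic group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between groups (0 = perfect parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outputs: 1 = shortlisted, 0 = rejected
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
print(selection_rates(preds, groups))                 # {'M': 0.75, 'F': 0.25}
print(demographic_parity_difference(preds, groups))   # 0.5
```

A gap of 0.5 on data like this would be a red flag worth documenting in the bias audit of your project report; real audits should use larger samples and multiple fairness metrics, not this single number.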
Privacy & Consent
Data Rights in the Age of AI
AI systems are often data-hungry. Ethical AI practice requires that data collection is lawful, consensual, purposeful and proportionate.
✅
Consent must be informed
People must understand what they are consenting to — in plain language, not buried in terms of service. Consent for one purpose does not extend to another.
✅
Data minimisation
Collect only the data you actually need for the stated purpose. Storing excess data creates excess risk.
✅
Purpose limitation
Data collected for one purpose (e.g., attendance tracking) must not be repurposed without fresh consent (e.g., behavioural analysis).
✅
Right to explanation
People affected by automated decisions should be able to learn how those decisions were made. India's DPDP Act 2023 establishes individuals' rights over their personal data, and frameworks such as the EU's GDPR go further with explicit safeguards around automated decision-making.
✅
Children and students
Student data carries special protections. Any AI system that processes student data requires institutional ethics clearance and explicit parental consent for minors.
Environmental Responsibility
The Hidden Cost of Large AI Models
Every AI query consumes energy. Training large AI models has a significant carbon footprint. Responsible AI use includes considering environmental impact — especially at scale.
~500 ml
Water evaporated per 20–50 ChatGPT queries (for data centre cooling)
~10×
Energy used per AI query vs. a standard Google search
552 t CO₂
Equivalent carbon footprint of training a single large language model
🌱
Practical green AI habits: Use smaller, task-specific models where possible. Avoid repeated large queries for trivial tasks. When building AI systems, consider model efficiency as a design criterion alongside accuracy. Report environmental impact in research papers where feasible.
Digital Equity
The AI Divide — Who Gets Left Behind
Access to powerful AI tools is not equal. The AI divide risks deepening existing inequalities — globally and within India. Ethical engineers design for everyone.
Those with AI Access
High-speed internet
Paid AI subscriptions
English-language fluency
Devices that run AI tools
AI literacy and training
⟺
Those Without
Low-bandwidth or no internet
Cannot afford paid tools
Use regional Indian languages
Older or low-spec devices
No exposure to AI education
⚠️
As a MITAOE engineer or researcher: When designing AI-assisted systems or assessments, always ask — "Does this work for someone without a paid subscription, high-speed internet, or English fluency?" This is not a theoretical question. It is a design requirement.
Interactive Ethics
Ethical Scenarios — What Would You Do?
Work through these real-world AI dilemmas. Choose your response, then reveal the ethical analysis. There are rarely perfect answers — the goal is to develop your ethical reasoning.
Scenario 1 of 6 · Academic Integrity
📝
You have a project report due tomorrow. You've done the work but haven't had time to write it up properly. You ask ChatGPT to write the report based on your notes and data. You read through it, it's accurate, and you submit it with minor edits. Your course is at Level 3 (grammar only). What are the ethical issues?
❌ Option A — Incorrect
✅ Option B — Correct
⚠️ Option C — Partially right, but still a violation
Analysis: At Level 3, AI may only be used for grammar and formatting. Writing substantive content — even based on your own notes — goes beyond the permitted level. That the ideas are yours does not make the expression yours when AI generated it.
Even with disclosure, this violates Level 3. Disclosure is required, but it does not legitimise use beyond the permitted level. The correct action was to write the report yourself and use AI only to check grammar.
What to do instead: Write the report yourself (even imperfectly), use AI only to check grammar, and disclose that use. If you need an extension, ask for one before the deadline.
Scenario 2 of 6 · Research Integrity
🔬
You are writing your M.Tech thesis literature review. You use an AI tool that gives you 15 references, all looking plausible — journal names, authors, years. You cite all 15 without checking them. Later, your supervisor finds that 6 of the references don't exist. Who is responsible?
❌ Option A — Incorrect
✅ Option B — Correct
❌ Option C — Incorrect
Analysis: You are 100% responsible. AI tools are known to hallucinate references — this is a well-documented limitation, not a surprise. Academic norms have always required authors to verify their references personally.
This constitutes citation fabrication — a form of research misconduct — regardless of how the error originated. "The AI did it" is not a defence in any academic integrity proceeding.
Rule: Every reference you submit must be verified by you against a real database (Google Scholar, PubMed, IEEE Xplore, Scopus). No exceptions.
Scenario 3 of 6 · Privacy
🔒
You are doing a capstone project on student performance. You want AI to help identify patterns in dropout risk. You upload a dataset with student names, roll numbers, attendance, grades and family income to ChatGPT to analyse it. What are the problems with this?
❌ Option A — Incorrect
✅ Option B — Correct
❌ Option C — Incorrect
Analysis: This is a serious privacy violation. Personally identifiable student data (names, roll numbers, grades, family income) is protected under India's DPDP Act 2023 and institutional data policies. Uploading it to a public AI tool violates student privacy rights — regardless of your academic purpose.
"Deleting from ChatGPT" does not undo data transmission. OpenAI's servers may retain conversation data per their privacy policy. Once data is uploaded, you cannot control where it goes.
Correct approach: Anonymise the dataset fully (no names, no roll numbers, replace income with brackets), obtain ethics clearance, and use a locally-run or privacy-compliant tool. Better still, work with aggregated data only.
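The anonymisation step described above can be sketched as a simple transformation applied before any record leaves institution-controlled storage: drop direct identifiers outright and coarsen quasi-identifiers like income into brackets. The field names and bracket boundaries below are hypothetical, chosen only for illustration — real projects should set them with ethics-committee guidance:

```python
def anonymise(record):
    """Return a copy of a student record with direct identifiers removed
    and family income coarsened into a bracket (hypothetical thresholds)."""
    def income_bracket(income):
        if income < 300_000:
            return "<3L"
        if income < 800_000:
            return "3L-8L"
        return ">8L"
    return {
        "attendance_pct": record["attendance_pct"],
        "cgpa": record["cgpa"],
        "income_bracket": income_bracket(record["family_income"]),
        # name and roll_number are deliberately dropped, not masked
    }

student = {"name": "A. Sharma", "roll_number": "MIT2021CS042",
           "attendance_pct": 71, "cgpa": 6.8, "family_income": 450_000}
print(anonymise(student))
# {'attendance_pct': 71, 'cgpa': 6.8, 'income_bracket': '3L-8L'}
```

Note that dropping names alone is not full anonymisation — rare combinations of remaining fields can still re-identify individuals, which is one more reason to prefer aggregated data.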
Scenario 4 of 6 · Bias
⚖️
Your team is building an AI-based resume screening system for a local manufacturing company. You train it on the company's historical hiring data (last 10 years). The model achieves 90% accuracy on the test set. In deployment, it consistently ranks female candidates lower for engineering roles. What is happening?
❌ Option A — Incorrect
✅ Option B — Correct
❌ Option C — Incorrect
Analysis: The model learned from historical hiring decisions that may have been discriminatory. "90% accuracy" is measuring the model's ability to replicate past decisions — including past biases. High accuracy does not mean fairness.
This is algorithmic discrimination — contrary to the equality guarantees of India's Constitution (Article 15) and increasingly prohibited under global AI regulation. Deploying this system harms candidates and exposes the company to legal liability.
What you must do: Audit training data for demographic imbalance. Use fairness metrics (demographic parity, equalised odds). Test outputs across gender, caste, location. Redesign the system with fairness constraints. Document the bias audit in your project report.
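The equalised-odds test named above asks a complementary question to demographic parity: among genuinely qualified candidates, are the shortlisting rates (true-positive rates) the same across groups? A minimal plain-Python sketch, with hypothetical audit labels — Fairlearn's `equalized_odds_difference` offers a production version:

```python
def true_positive_rates(y_true, y_pred, groups):
    """Per group: of genuinely qualified candidates (y_true == 1),
    what fraction did the model shortlist (y_pred == 1)?"""
    qualified, shortlisted = {}, {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yt == 1:
            qualified[g] = qualified.get(g, 0) + 1
            shortlisted[g] = shortlisted.get(g, 0) + int(yp)
    return {g: shortlisted[g] / qualified[g] for g in qualified}

def tpr_gap(y_true, y_pred, groups):
    """Largest true-positive-rate gap between groups (0 = equal treatment)."""
    rates = true_positive_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: 1 = qualified (y_true) / shortlisted (y_pred)
y_true = [1, 1, 1, 0, 1, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
print(true_positive_rates(y_true, y_pred, groups))
```

On this toy sample, equally qualified women are shortlisted at half the rate of men — exactly the pattern a "90% accurate" model can hide, since accuracy rewards replicating the biased historical labels.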
Scenario 5 of 6 · Transparency
🤖
A startup approaches you to build a chatbot for a mental health helpline. The chatbot will respond to users in distress — but users will not be told they are talking to an AI. The startup says "users respond better when they think it's a human." Should you build it?
❌ Option A — Incorrect
✅ Option B — Correct
❌ Option C — Incorrect
Analysis: Deceiving vulnerable users — especially in high-stakes mental health contexts — is a serious ethical violation. The principle of transparency requires that users know when they are interacting with AI. This is now enshrined in the EU AI Act and is expected in India's AI governance framework.
The risk of harm is real: A user in crisis may disclose information they would not share with an AI, may rely on the "relationship" in ways that cause harm when they discover the deception, or may receive inadequate support that a real human would escalate.
Effectiveness does not justify deception. You can build an effective, transparent AI mental health tool that clearly identifies itself as AI. Refuse to build the undisclosed version and explain why.
Scenario 6 of 6 · Accountability
🏭
An AI quality-control system you designed for a factory incorrectly clears a faulty batch of components. The components are used in a medical device, which fails in a patient. The company says "the AI made the decision." Who is accountable?
❌ Option A — Incorrect
❌ Option B — Partially correct, but incomplete
✅ Option C — Correct
Analysis: AI cannot be held legally or morally accountable — it has no legal standing. Accountability always returns to the humans in the chain: the engineer who designed and validated the system, the company that deployed it without adequate human oversight, and potentially the AI tool's developers if there was a known defect.
In safety-critical applications (medical, aviation, structural, automotive), AI must never be the sole decision-maker. A human override and review step is a mandatory engineering requirement — not optional.
Your responsibility as an engineer: Design safety margins. Insist on human-in-the-loop for high-stakes decisions. Document system limitations clearly. Never overpromise AI reliability. This is not just ethics — it is professional engineering practice.
🎓
All Scenarios Complete
Ethical reasoning is not about finding the "right" answer quickly — it's about developing the habit of asking the right questions before you build, deploy or use AI systems.
Ethical Decision-Making
The MITAOE AI Ethics Framework
When you face an uncertain AI decision — in your studies, research or professional life — run through these five questions.
Q1
Who could be harmed?
Identify every group of people affected by this AI decision — including those not in the room. Consider direct and indirect harms, short-term and long-term.
Q2
Is this transparent?
Would the affected people know AI is involved? Would they understand how the decision was made? Would they consent if they knew?
Q3
Is this fair?
Does this system work equally well for different demographic groups? Have you tested it? Are the consequences of errors distributed equitably?
Q4
Who is accountable?
Is there a named, reachable human who is responsible for this AI's decisions? Is there a redress mechanism for people harmed by the system?
Q5
Would you be comfortable if it were public?
If the design choices, training data, and outcomes of this system were reported in a newspaper, would you be comfortable? If not — reconsider.
📚
Further reading: UNESCO Recommendation on the Ethics of AI (2021) · IEEE Ethically Aligned Design · EU AI Act 2024 · NITI Aayog Responsible AI for All · India's Digital Personal Data Protection Act 2023 · MeitY Draft AI Governance Framework