Kookmin AI Ethics Charter
KOOKMIN Guidelines for the Use of AI in Teaching and Learning

As a member of Kookmin University, I declare the following regarding the use of Artificial Intelligence:

  • 01

    I will understand the basic principles and latest trends of AI.

  • 02

    I will neither blindly trust nor unconditionally reject AI.

  • 03

    When using AI, it is my responsibility to filter information and verify its accuracy.

  • 04

    I will ensure that AI use does not undermine the university’s goal of fostering creative talent.

  • 05

    I will actively explore AI as a new learning tool.

  • 06

    I will strive to find innovative learning methods using AI.

  • 07

    The use of AI will be mutually agreed upon between professors and students.

  • 08

    I will not use AI-generated results without critical evaluation.

  • 09

    I will clearly disclose whether AI was used when submitting assignments.

  • 10

    Creative questioning and logical criticism are my intellectual strengths when using AI.

Kookmin University Guidelines for the Use of Generative AI
I. Overview

With the rapid advancement of generative AI technologies such as ChatGPT, Perplexity, and Gemini, the use of AI across education, research, and administration is expanding quickly.

Amid these rapid changes, Kookmin University has made continuous efforts, grounded in pragmatism, to cultivate practical and convergent talents who can transform society. Since 2015, the university has offered Python programming education through the foundational course Computer Programming II as a basis for AI education. In 2023, it went further, declaring the “AI Ethics Charter” and establishing a framework for using AI professionally and as a tool across the curriculum.

II. Definitions and Standards

1. Definition of Generative AI

Generative AI refers to artificial intelligence technology that uses deep learning models trained on large-scale data to automatically generate new content, such as text, images, audio, and video, based on user prompts.

2. Major Generative AI Tools

  • Text generation
  • Image generation
  • Coding support
  • Others

3. Basic Principles

Kookmin University’s use of generative AI adheres to the following core principles.

- Transparency: Clearly indicate and disclose whether and how AI is used
- Accountability: The user bears final responsibility for AI-generated outputs
- Integrity: Do not undermine academic integrity or the value of degrees
- Security: Protect personal and sensitive information
- Fairness: Ensure ethical use free from discrimination and bias
- Critical Thinking: Critically review AI-generated results

4. Students’ Rights to Use Generative AI

Students may use generative AI tools within reasonable limits.
Professors must provide clear guidelines on AI use, ensure students have the opportunity to explain themselves before misconduct is determined, and establish procedures to challenge AI detection results.

III. Usage Restrictions (Prohibited Actions)

1. Prohibited Actions in Learning Activities

The following actions must be avoided. Violations may be considered academic misconduct and subject to disciplinary action.

  • 01

    Unauthorized Ghostwriting

    - Submitting assignments, exams, or papers fully generated by AI as one’s own work

    * Exception: when explicitly permitted by the instructor

  • 02

    Failure to Cite Sources

    - Not indicating which parts were generated using AI

    - Not providing citation sources or prompt details

  • 03

    Attempting to Evade AI Detection

    - Intentionally deceiving AI detection tools

    - Manipulating AI-generated outputs before submission

  • 04

    Unfair Collaboration with Peers

    - Using AI to ghostwrite or share assignments

    - Sharing exam questions/answers via chat rooms or messaging apps

  • 05

    Serious Intellectual Property Infringement

    - Inputting others’ works into AI to generate or distribute content without permission

    - Providing others’ images, audio, or videos as AI training data

2. Prohibited Actions in Research Activities

  • 01

    Entering Confidential Information

    - Uploading government-funded or university confidential data to public AI

    - Inputting personal information (e.g., resident ID numbers, student data)

  • 02

    Using Unverified Results

    - Using AI-generated data, statistics, or citations without verification

    - Presenting incorrect or biased information as fact

  • 03

    Misrepresentation of Authorship

    - Listing AI tools as authors

    - Failing to disclose AI usage in academic submissions

  • 04

    Research Misconduct

    - Reproducing existing research results using AI and presenting them as new

    - Plagiarism or self-plagiarism

3. Prohibited Actions in Administrative Work

  • 01

    Leakage of Sensitive Information

    - Inputting student, employee salary, or financial data

    - Entering confidential internal documents

  • 02

    Damage to Institutional Credibility

    - Automatically generating official announcements or press releases using AI

    - Publishing official opinions without verification

  • 03

    Unfair Decision-Making

    - Using AI outputs directly without final review

    - Failing to disclose evaluation criteria or reasoning

IV. Recommendations

1. Recommendations for All Users

01

Personal Information Protection

  • Information That Must Not Be Entered into Public AI
    • Personal: ID numbers, phone numbers, emails, addresses
    • Financial: bank accounts, salary, credit card numbers
    • Institutional: student ID numbers
  • Response Measures
    • Anonymize sensitive data before use
    • Use institution-approved AI tools (enhanced security versions)

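The anonymize-before-use measure above can be sketched in code. The following is a minimal, illustrative example, not an official tool: the patterns and placeholder labels are assumptions, and real deployments would need locale-specific rules (e.g., Korean resident registration numbers). It masks a few common identifier formats before text is pasted into a public AI service.

```python
import re

# Illustrative patterns for a few common identifier formats (assumptions,
# not an exhaustive or officially sanctioned list).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),
    "CARD": re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with typed placeholders
    before the text is sent to a public AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact the student at jane.doe@kookmin.ac.kr or 010-1234-5678."
print(redact(prompt))
# → Contact the student at [EMAIL] or [PHONE].
```

A redaction pass like this is a complement to, not a substitute for, using institution-approved tools: pattern matching misses free-text identifiers such as names.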
02

Obligation to Verify Generated Content (Hallucination Warning)

AI-generated content may not be 100% accurate; therefore, the following steps must be taken:
- Fact-checking: verify with reliable third-party sources
- Bias review: check for discriminatory or biased expressions
- Logical review: identify logical errors
- Currency review: consider the time limitations of the training data
03

Mandatory Source Citation

When using generative AI, the following information must be disclosed.
Examples:
  • Text Generation

    Generative AI Tool Name. (Date). “[Prompt content]”. URL

    Example:

    ChatGPT 5.1. (2025.08.20). “[Prompt content]”. https://chat.openai.com

  • Coding Support

    Generative AI Tool Name. (Date). “[Task description]”

    Example:

    GitHub Copilot. (2024.09.17). “[Task: Writing a Python function]”

  • Image Generation

    Generative AI Tool Name. (Date). “[Prompt content]”. URL

    Example:

    DALL·E 3. (2024.05.15). “[Prompt content]”. https://openai.com/dall-e-3

  • Papers/Assignments

    Generative AI (Tool name, version, date) states as follows: [Quoted content]

    Example:

    Generative AI (ChatGPT 5.1, 2025.10.15) states as follows: [Quoted content]
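The citation patterns above are regular enough to assemble mechanically. The sketch below is illustrative only; the function and argument names are assumptions, not part of the guideline.

```python
def ai_citation(tool, date, prompt, url=None):
    """Build a citation string in the guideline's
    'Tool. (Date). "[Prompt]". URL' pattern (URL optional,
    as in the coding-support example)."""
    cite = f"{tool}. ({date}). “[{prompt}]”"
    if url:
        cite += f". {url}"
    return cite

print(ai_citation("GitHub Copilot", "2024.09.17", "Task: Writing a Python function"))
# → GitHub Copilot. (2024.09.17). “[Task: Writing a Python function]”
```

For papers and assignments, the inline form shown above (“Generative AI (Tool, date) states as follows: …”) should still be written out by hand so the quoted content is clearly attributed.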

  • 04

    Security Management

    - Strengthen multi-factor authentication (MFA), an additional layer of verification beyond passwords

    - Avoid public Wi-Fi (use VPN if necessary)

    - Regularly check AI login activity

    - Adjust data settings (disable chat history/model training if possible)

  • 05

    Ethical Use

    - No discriminatory or hateful content

    - No fake news or misinformation

    - No defamation or support for illegal activities

    - Protect minors (no inappropriate content)

  • 06

    Reflecting Academic Diversity

    As the appropriateness and risks of AI use vary across academic disciplines, a differentiated approach by field shall be applied.

    - Humanities and Social Sciences: Design assignments that emphasize critical thinking and argumentation

    - Science and Engineering: Permit AI-assisted coding while strengthening the evaluation of algorithmic understanding

    - Arts and Physical Education: Require documentation of the creative process, and limit AI use to reference purposes only

2. Recommendations for Instructors

  • 01

    Examples of AI Use Policies Instructors May Adopt

    - Full prohibition / Restricted use (permitted with prior approval or with proper citation) / Unrestricted use

  • 02

    When Preparing the Course Syllabus

    - The following items must be clearly specified in the syllabus:

    Examples of Generative AI Usage Guidelines
    • 1. Policy on the use of generative AI in this course

      Permitted / Restricted / Prohibited

    • 2. Permitted scope of AI use

      Examples: understanding concepts, brainstorming ideas, learning code writing

    • 3. Prohibited scope of AI use

      Examples: full AI-generated submissions, failure to cite sources

    • 4. Mandatory requirements when using AI

      - Citation methods and format

      - Whether prompt disclosure is required

      - Obligation to verify and critically review AI-generated content

  • 03

    Assignment and Exam Design

    - Design assignments that AI cannot simply complete: structure them to require creative application, critical analysis, and personal reflection rather than simple knowledge reproduction

    • Examples of Effective Assignments

      - “Analyze the ethical issues of ChatGPT and present your own position.”

      - “Apply the concepts learned in this course to a recent social issue.”

      - “If AI tools were used, describe the process and verification steps.”

    • Examples of Weak Assignments

      - “Summarize Chapter 3 of Principles of Economics” (easily replaceable by AI)

      - Exams composed solely of multiple-choice questions

      - Assignments that do not require personalized cases or individual experience

  • 04

    Evaluation Method Design

    - Process-based evaluation: Evaluate not only final results but also the writing process and interim reviews, expanding the weight of intermediate outputs produced throughout the learning process (this reflects learning experiences and thought processes that AI cannot replace)

    - Strengthening oral assessment: Require students to explain their papers or assignments and participate in Q&A

    - Peer evaluation: Enhance creativity through mutual feedback among students

    - Self-assessment: Encourage reflection on one’s learning process and use of AI

  • 05

    Student Guidance and Education

    - Explain AI policies during the first class session (at least 15 minutes)

    - Provide training on citation and verification methods

    - Explain limitations of AI detection tools (accuracy 50–80%)

    - Ensure transparent misconduct evaluation: AI detection scores alone shall not be used to determine misconduct, and manual review will be conducted in parallel

3. Recommendations for Students

  • 01

    Attitude Toward AI Use

    - AI is a support tool, not a replacement for thinking

    - Use AI to enhance learning, but ensure full understanding and verification

    - Maintain a balance between convenience and learning

02

Effective Use of AI

Recommended use
  - Step 1: Understand concepts, request explanations, generate examples
  - Step 2: Brainstorm problem-solving ideas, identify errors
  - Step 3: Review structure, improve expression
  - Step 4: Verify logic, check supporting evidence

Prohibited use
  - Requesting fully generated answers
  - Having AI write one’s arguments or claims
  - Skipping the verification process
  • 03

    Areas where AI is not recommended

    - Fields requiring creative thinking: arts, ethical reasoning, critical analysis

    - Tasks requiring personal experience: essays, case analyses, reflection journals

    - Assessment of conceptual understanding: exams, oral assessments

04

Responsible Learning Attitude

Pre-Submission Checklist
  - Have I clearly disclosed AI use?
  - Have I verified the generated content?
  - Have I distinguished my own ideas from AI suggestions?
  - Have I properly cited sources?
  - Could this constitute academic misconduct?

4. Recommendations for Researchers

  • 01

    Information to Disclose When Writing Research Papers

    - The generative AI tools used (e.g., ChatGPT 5, Claude 2.0)

    - When and how much they were used

    - Purpose of use (draft writing, expression improvement, data analysis, etc.)

    - Specific sections used (e.g., “Used ChatGPT 5.1 for writing the abstract”)

    - Citation format (follow journal guidelines)

  • 02

    Use of Generative AI Disclosure Statements

    Example of a Generative AI Disclosure Statement
    • This study used generative AI as follows:

      - ChatGPT 5.1 (OpenAI): improving the English abstract

      - Claude (Anthropic): assisting with the interpretation of data analysis results

      - The author assumes full responsibility, and all claims are those of the author.

  • 03

    Prohibited Research Use

    - Inputting confidential research data into public AI

    - Using AI-generated results without verification in papers

    - Listing AI as an author (authors must be human)

    - Reproducing existing research with AI and publishing as new work

  • 04

    Data Security

    - Use institution-approved or subscription-based AI tools, such as Microsoft Research 365 or the ChatGPT Education version

    (These versions provide enhanced data protection and privacy.)

  • 05

    Precautions When Using Public AI

    - Be aware that input data may be used for future model training

    - Anonymization (removal of identifiable information) is mandatory
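The mandatory anonymization step can be illustrated with a short sketch. The record fields ("name", "age") and the 10-year age banding below are illustrative assumptions, not a prescribed format.

```python
# A minimal anonymization sketch, assuming research records are simple dicts.
# Field names and the 10-year binning rule are illustrative choices.

def anonymize(records):
    """Replace direct identifiers with opaque IDs and coarsen ages
    into 10-year bands before data is shared with a public AI tool."""
    out = []
    for i, rec in enumerate(records):
        low = (rec["age"] // 10) * 10
        out.append({
            "id": f"P{i + 1:03d}",           # name -> opaque participant ID
            "age_group": f"{low}-{low + 9}",  # age -> age band, e.g. "20-29"
            "response": rec["response"],      # non-identifying content kept
        })
    return out

sample = [{"name": "Kim", "age": 27, "response": "agree"}]
print(anonymize(sample))
# → [{'id': 'P001', 'age_group': '20-29', 'response': 'agree'}]
```

Note that dropping names and coarsening ages reduces, but does not eliminate, re-identification risk; combinations of remaining fields can still be identifying, which is one reason institution-approved tools are preferred.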

5. Recommendations for Administrative Staff

  • 01

    Use for Work Efficiency

    - Assisting with meeting minutes

    - Supporting draft writing of official documents

    - Summarizing and analyzing statistical data

    - Automating schedule management and notifications

  • 02

    Prohibited Uses

    - Inputting student personal information, faculty/staff information, financial data

    - Inputting confidential approval documents

    - Publishing AI-generated press materials without fact-checking

  • 03

    Principle of Transparency

    - Clearly disclose the use of AI in official university communications and decision-making

    - When AI is used in decision-making, explicitly indicate “AI-assisted”; final responsibility rests with the responsible officers

V. Usage Examples

1. Student Learning Activity Cases

01

Course Assignment

Example: Financial crisis analysis assignment in an economics course
Example of Prohibited Use
  • Student:

    “Explain the 2008 financial crisis.”

  • ChatGPT :

    [Generates a lengthy explanation]
    - The student submits the content as-is

Issue: Lack of citation and absence of actual learning

Example of Recommended Use
  • Step 1: Plan AI use

    “Use AI to help organize the causes of the financial crisis in my own words.”

  • Step 2: Ask AI

    “Briefly explain three specific causes of the 2008 financial crisis.”

  • Step 3: Verify

    Check against economics textbooks and news articles

  • Step 4: Personal analysis

    “The causes of the financial crisis are as follows: [my interpretation]
    According to ChatGPT (2025.10.15)… [citation and reference]”

  • Step 5: Critical review

    “The limitations of the AI explanation are… [my evaluation]”

Outcome: Transparency, verification, and clear accountability

02

Thesis Writing

Example: Master’s thesis writing
Example of Prohibited Use
  • Researcher:

    “Summarize the key points of relevant studies for the literature review.”

  • ChatGPT :

    [Generates content]
    - The researcher inserts the content into the literature review without verification

Issue: Errors in AI-generated references and potential hallucinations, including citations of non-existent studies

Example of Recommended Use
  • Step 1: Gather and review five relevant papers

  • Step 2: Ask AI

    “Summarize the key arguments of these five papers in one sentence each.”

  • Step 3: Verify

    Compare the AI summaries with the original texts

  • Step 4: Personal synthesis

    “The key issues in prior research are… To address these gaps… [my analysis]”

  • Step 5: Disclosure

    “Generative AI Disclosure: ChatGPT 5.1 was used to assist with summarizing prior research (final verification conducted by the author) and to support interpretation of data analysis.”

Outcome: Transparency, verification, and clear accountability

2. Instructor Use Cases

01

Lecture Preparation

Example: Data science course
Usage Examples
  • 1. Assist with lecture material development

    - “Based on the learning objectives and topics for Week 3 of a big data analytics course, can you draft a 20-minute lecture outline?”

  • 2. Generate practice problems

    - “Create three Python exercises for beginners in financial data analysis.” → AI-generated content is reviewed, revised, and refined by the instructor

  • 3. Draft student feedback

    - “Summarize three strengths and three areas for improvement in this student’s paper.” → Final feedback is written by the instructor

Outcome: Reduced repetitive tasks and more time for high-quality feedback

02

Exam Development

Example: Ethics course
Example of Prohibited Use
- Using an AI-generated exam question directly, without reviewing its accuracy, fairness, or ethical framing

Issue: Potential errors in ethical implications or contextual accuracy

Example of Recommended Use
  • Step 1: Ask AI

    “Create an essay exam question comparing utilitarianism and deontology.”

  • Step 2: Review AI-generated questions

    “Does it align with the learning objectives?”
    “Is it clear and fair?”
    “Is there any moral or political bias?”

  • Step 3: Revise and finalize

    The instructor reviews, edits, and finalizes the question

  • Step 4: Indicate in eCampus, exam materials, or the syllabus:

    “This question was developed with AI assistance and has been reviewed and finalized by the instructor.”

Outcome: Transparency and quality assurance

03

Research Use

Example: Literature search and organization in microbiology and immunology research
Usage Examples
  • Step 1: Search and collect papers

    Retrieve and collect 20 relevant papers from academic databases

  • Step 2: Ask AI

    “Can you organize the research methods and key findings of the following papers into a table? [List of titles and DOIs]”

  • Step 3: Verify AI-generated output

    Compare with original papers and correct any errors

  • Step 4: Disclosure

    “Use of generative AI: ChatGPT 5.1 was used to assist in creating a summary table of prior research (all papers were verified by the author)”

Outcome: Time efficiency and systematic organization

04

Data Analysis

Example: Data analysis in sociological research
Example of Prohibited Use
- Entering respondents’ personal data and survey responses into public AI systems

Issue: Risk of personal data leakage and violation of research ethics regulations

Example of Recommended Use
  • Step 1: Use institution-approved tools

    Use Microsoft Azure (secure data environment) or the university-subscribed ChatGPT 5.1

  • Step 2: Anonymize data

    Remove personally identifiable information (e.g., name → ID, age → age group)

  • Step 3: Ask AI (using anonymized data)

    “Analyze the key trends and correlations in this dataset.”

  • Step 4: Verify

    Recheck AI-generated results using statistical software and assess validity

  • Step 5: Disclosure

    “Use of generative AI: ChatGPT was used to support interpretation of data analysis, and results were verified using SPSS. Only anonymized data were provided to the AI to ensure privacy protection.”

Outcome: Transparency and quality assurance
