Higher education institutions must consider the implications of generative AI for academic integrity, as 75% of students admit to using AI tools to complete assignments. This raises concerns about the authenticity of student work and the potential for cheating.
To address these concerns, institutions can implement policies that require students to disclose when they've used AI tools in their work. This can help educators accurately assess student understanding and skills.
Some institutions also hope to use AI-based tools to detect AI-generated content and plagiarism, on the theory that machine-written text carries identifiable linguistic patterns. In practice, these detectors are unreliable (a point revisited in the Academic Integrity section below), so educators should treat their results as a signal to investigate, not as proof of misconduct.
Institutions should also consider the potential benefits of generative AI, such as its ability to assist students with disabilities or language barriers.
Policy Considerations
Generative AI models can "hallucinate", or make up false information, and their responses can also change over time as models are updated. Because of this, it's essential to periodically re-validate their output to ensure accuracy.
To mitigate potential risks, a solid generative AI policy should include guidelines on acceptable use, data protection, and intellectual property safeguarding. This will help prevent misuse and ensure that users are aware of the potential risks involved.
A robust policy should also outline disciplinary actions for policy violations and provide clear instructions on how to obtain necessary approvals for original works and intellectual property. This will help maintain a culture of transparency and accountability within the organization.
Here are some key considerations to include in a generative AI policy:
- Revalidation of output: Regularly check the accuracy of generative AI responses to prevent false information (a minimal sketch of such a check follows this list).
- Original work and intellectual property: Obtain necessary approvals before inputting original works or copyrighted material into generative AI tools.
- Informed consent: Inform users when generative AI tools are in use and encourage them to make informed decisions about the generated content.
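To make the revalidation item concrete, here's a minimal sketch of a periodic spot-check in Python: a set of prompts with known correct answers is re-run against a model, and any response that drops the expected fact is flagged as drift. The `generate` function is a hypothetical stand-in for whatever generative AI client your institution actually uses, not a real API.

```python
from typing import Callable

def generate(prompt: str) -> str:
    # Hypothetical placeholder: wire this to your institution's approved
    # generative AI provider. Here it returns a canned answer for demo.
    return "Wesleyan University was founded in 1831."

# Prompts paired with a fact each answer must contain to count as accurate.
VALIDATION_SET = [
    ("In what year was Wesleyan University founded?", "1831"),
    ("What does FERPA stand for?", "Family Educational Rights and Privacy Act"),
]

def revalidate(generate_fn: Callable[[str], str]) -> list[str]:
    """Re-run each known-answer prompt and report responses that drift."""
    failures = []
    for prompt, expected in VALIDATION_SET:
        answer = generate_fn(prompt)
        if expected.lower() not in answer.lower():
            failures.append(f"DRIFT: {prompt!r} no longer mentions {expected!r}")
    return failures

for failure in revalidate(generate):
    print(failure)  # with the canned answer above, the FERPA prompt is flagged
```

Running a check like this on a schedule, say from a cron job each term, turns "periodically re-validate output" from a policy aspiration into a repeatable operational task.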
Policy Example
Developing a solid generative AI policy is crucial for any organization. It's essential to include a clear definition of generative AI in the policy.
Generative AI is a type of AI that can create new content, such as images, videos, or text. A good policy should specify what types of generative AI are allowed and which are not.
It's also important to include guidelines for data usage and ownership in the policy. This will help prevent data breaches and ensure that data is used responsibly.
Whatever its specifics, a good generative AI policy example sets clear expectations for AI usage from the start.
Policy
Developing a comprehensive policy is crucial when it comes to generative AI use in higher education. A solid policy should include what is and isn't acceptable use, as well as a robust explanation of the institution's stance on data protection and information on privacy laws.
It's essential to mitigate potential risks with a generative AI policy. A good policy should also include disciplinary actions that may be taken if the policy is violated, and how to safeguard intellectual property.
A generative AI policy should be flexible and adaptable to the rapidly evolving AI technologies. Standardized, one-size-fits-all AI policies are not sustainable in the long term.
Here are some key considerations for a generative AI policy:
- What is and isn’t acceptable use
- A robust explanation of the institution's stance on data protection and information on privacy laws
- Disciplinary actions that may be taken if the policy is violated
- How to safeguard intellectual property
Compliance with other university policies is also crucial. Users must comply with all relevant Wesleyan University policies, as well as applicable state and federal laws, when using generative AI tools. This includes policies related to academic integrity and the honor code, copyright and intellectual property, and data governance and security.
It's also essential to involve all stakeholders in the development of the policy, including students, faculty, administrators, IT leaders, researchers, staff, and external groups.
Data Management
Data management is crucial when working with generative AI: information entered into a public tool may be retained, reused, or exposed, so you need to ensure your data is handled appropriately.
To determine if your data requires special attention, check Northwestern's Data Classification Policy. If your data is Level 1 (non-confidential and public data), you're good to go. You can upload it to generative AI tools without any issues.
If your data is above Level 1, any generative AI tool must have been approved through Northwestern IT's procurement and security review processes. This ensures that your sensitive data is protected.
Here's a summary of Northwestern's current services posture based on data classification:

- Level 1 (non-confidential, public data): publicly available tools such as ChatGPT, Bing Chat, and Bard/Gemini may be used.
- Levels 2, 3, and 4 (sensitive data): only generative AI tools validated through Northwestern IT's procurement and security review processes may be used.
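As an illustration of how a posture like this could be enforced in tooling, here's a minimal sketch that gates uploads by classification level. The level numbering follows Northwestern's scheme described above, but the approved-tool registry is a hypothetical example, not an actual Northwestern service list.

```python
# Level 1 is non-confidential, public data in Northwestern's scheme.
PUBLIC_LEVEL = 1

# Hypothetical registry of tools cleared through IT procurement and
# security review; a real deployment would load this from a live source.
APPROVED_TOOLS = {"ApprovedCampusCopilot"}

def may_upload(data_level: int, tool: str) -> bool:
    """Return True if data at this classification may go to this tool."""
    if data_level == PUBLIC_LEVEL:
        return True                # Level 1: public tools are fine
    return tool in APPROVED_TOOLS  # Levels 2-4: approved tools only

assert may_upload(1, "ChatGPT")                # public data, public tool
assert not may_upload(3, "ChatGPT")            # sensitive data, unapproved tool
assert may_upload(3, "ApprovedCampusCopilot")  # sensitive data, approved tool
```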
Types of Data in Software
When handling sensitive data in generative AI software, it's essential to understand which types of data can be used where. Only information classified as "Public" may be entered into publicly available generative AI tools, so researchers and students must be mindful of the classification of the data they're working with.
Information classified as "Confidential" can be entered into generative AI tools that have been reviewed and approved by the relevant authorities. Users must ensure the tool has been vetted before uploading confidential data.
Researchers and students can collect, use, and share information as part of their research with these tools. However, they must review the terms and conditions of the software's user agreement to ensure compliance with data protection and privacy laws.
Any information classified as "Restricted" requires explicit written approval from the Chief Information Security Officer and the relevant Cabinet member before being entered into a generative AI tool.
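These rules can be expressed as a small decision function, which makes them easy to audit and to embed in intake forms or upload workflows. This is a sketch under the classification scheme just described; the approval flags are hypothetical inputs, not an existing system's API.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

def entry_allowed(level: Classification, tool_vetted: bool,
                  ciso_approved: bool, cabinet_approved: bool) -> bool:
    """Mirror the rules above: Public data may enter public tools,
    Confidential data only vetted tools, and Restricted data requires
    explicit written approval from the CISO and the relevant Cabinet
    member on top of tool vetting."""
    if level is Classification.PUBLIC:
        return True
    if level is Classification.CONFIDENTIAL:
        return tool_vetted
    return tool_vetted and ciso_approved and cabinet_approved

assert entry_allowed(Classification.PUBLIC, tool_vetted=False,
                     ciso_approved=False, cabinet_approved=False)
assert not entry_allowed(Classification.RESTRICTED, tool_vetted=True,
                         ciso_approved=True, cabinet_approved=False)
```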
Academic Use
Universities like Duke and Yale are taking a flexible approach to AI policies, recognizing that standardized, one-size-fits-all policies may not be sustainable in the long term. They're focusing on understanding the issues at stake for faculty, students, staff, and administrators. At Northwestern, governance of academic use is more centralized, as described below.
Academic Use at Northwestern
At Northwestern University, the use of generative AI for teaching and learning purposes is governed by the Provost's Committee on Generative AI, in tandem with Northwestern IT. Guidance on generative AI tools and their impact on teaching and learning can be found on the Office of the Provost website.
Stanford University provides a clear example of acceptable AI use: "Using generative AI tools to substantially complete an assignment or exam is not permitted. Students should acknowledge the use of generative AI and default to disclosing such assistance when in doubt."
Universities like Northwestern need to consider multiple perspectives when crafting AI policies, including governance, pedagogy, and operations. Generative AI policies should address ethics, equity, and accuracy, as well as provide guidance on acceptable AI use in the classroom.
Here are the three areas of focus for higher education leaders when crafting AI policies, as recommended by EDUCAUSE:
- Governance: Address ethics, equity, and accuracy issues.
- Pedagogy: Establish clear and specific generative AI guidance for courses.
- Operations: Account for technical training and support needs.
Academic Integrity
Academic Integrity is a major concern with the rise of generative AI. Generative AI makes it easy for students to create text that seems like a human wrote it, but tools for detecting AI-generated text are notoriously unreliable.
To address this issue, some experts recommend changing the design and structure of assignments, which can substantially reduce students' likelihood of cheating and enhance their learning, as Yale University notes.
One solution is to invite students to request a draft of the assignment from ChatGPT, then facilitate a discussion to analyze how the drafts compare, as Princeton University suggests.
This approach helps students become good digital citizens, as EDUCAUSE senior researcher Jenay Robert points out, and it ties classroom practice back to institutional policy.
Here are some key considerations for academic integrity in the age of generative AI:
- Change the design and structure of assignments to reduce cheating and enhance learning.
- Provide clear guidance on acceptable AI use, such as Stanford University's policy.
- Facilitate discussions to analyze how drafts compare, as Princeton University suggests.
- Emphasize the importance of digital citizenship and responsible AI use.
Higher Education Use Policy
Developing a generative AI policy for higher education can be a daunting task, but it's essential to mitigate potential risks and ensure a smooth transition.
Start small, figure out what does and doesn’t work, and build from there, advises Jenay Robert, an EDUCAUSE senior researcher. This means beginning with the basics, such as giving students clear guidance on acceptable AI use.
The three main areas of focus for higher education generative AI policies, as advised by Robert, are governance, pedagogy, and operations. Governance addresses issues of ethics, equity, and accuracy; pedagogy involves establishing clear and specific generative AI guidance for courses; and operations accounts for technical training and support, as well as for implementing AI to make operations more efficient and effective.
Here are the four levels of AI policy development activities recommended by EDUCAUSE:
- Individual: Engage students and faculty to find out how they use generative AI and how they feel about ethics and the impact on learning.
- Department or unit: Assess the role of generative AI in academic programs and find common ground between departments.
- Institution: Establish an AI governing body for oversight and guidelines that foster equity and accuracy.
- Multi-institution: Consult with other universities and private sector organizations to find out how they are handling generative AI challenges.
To create a successful generative AI policy, get all your stakeholders involved: talk to students, faculty, administrators, IT leaders, researchers, staff, and external groups, including vendors and technology partners.
Sources
- Northwestern Guidance on the Use of Generative AI (northwestern.edu)
- AI Guidelines, Harvard University Information Technology (huit.harvard.edu/ai/guidelines)
- Generative Artificial Intelligence (AI) Use and Policies, Boise State University (boisestate.edu)
- Generative AI Guidance, University of Chicago IT Services (its.uchicago.edu)
- Generative AI UW-Madison Use Policies, University of Wisconsin-Madison (it.wisc.edu)
- AI Policy, Taylor & Francis (taylorandfrancis.com)
- McKinsey & Co (mckinsey.com)
- EDUCAUSE guidance on building an AI acceptable use policy (educause.edu)
- 2024 EDUCAUSE AI Landscape Study (educause.edu)
- EDUCAUSE examples of AI-related acceptable use policies (educause.edu)
- Stanford University (stanford.edu)
- Duke University (duke.edu)
- University of Kansas Center for Teaching Excellence report (ku.edu)
- Princeton University (princeton.edu)
- 2024 EDUCAUSE Action Plan: AI Policies and Guidelines (educause.edu)
- IBM on AI hallucinations (ibm.com)