AI Use Guidance
Guidelines for AI Use at Baruch College
Developed by the AI Think Tank Governance and Operations Subcommittee with input from the campus community
March 11, 2024
Note: This is a living document; the guidelines will be updated as AI technology evolves.
I. General Principles
Technologies assist people in their work, problem-solving, and day-to-day lives, and they enhance our capacities as teachers, students, scholars, and administrators. The rapid development of generative AI has begun to fundamentally alter, and will continue to alter, the way we find, use, create, and disseminate knowledge and information (including data). These guidelines are intended to help the Baruch College community use artificial intelligence tools effectively and ethically.
The guidelines draw from the White House “Blueprint for an AI Bill of Rights” and the European Union’s AI Act. The key guideline for the use of generative AI is simple:
- Disclose that the content was generated by AI while providing details when appropriate (for example, the prompts used to generate content)
Course materials are covered by copyright and as such should not be used to train a large language model (LLM) without the express consent of the copyright owner (for course materials, this is usually the faculty member).
II. Teaching and Learning
The easy accessibility of generative AI (GenAI) tools means that students, faculty, and staff are already using AI tools in their work. (Examples of GenAI tools include MS Copilot, ChatGPT, and Google Gemini.) Further, students will be graduating into a workforce that uses AI of all kinds and perhaps even creates AI. Thus, as an educational institution, Baruch has an obligation to provide students with opportunities to learn to use AI effectively and ethically.
- Syllabi. Faculty are encouraged to integrate artificial intelligence education into their courses as appropriate to their disciplines, but are not obligated to do so. All syllabi should include a course- and discipline-appropriate policy on the use of generative AI.
i. Such a course policy should set clear expectations about the use of GenAI tools and should explain how and when they can be used in the course. For example, GenAI tools can encourage critical thinking and improve the quality of deliverables for some assignments, yet for others it may be appropriate to prohibit their use when the assignment is meant to assess learning of foundational concepts. (Examples can be found on the CTL's AI Resource Page.)
- Assignments. Assignments in which generative AI is integrated or allowed should clearly state the parameters of such use. These parameters should include:
i. Disclosure that the content was generated by AI
ii. Student responsibility to verify the accuracy of AI-generated content
iii. Documentation of prompts used and other relevant inputs
iv. AI should not be used to generate illegal or inappropriate content
v. AI should only be used when allowed by the course instructor
- Academic Integrity. At all times, students must adhere to the institution's academic integrity rules. Generative AI may be used only with the instructor's explicit permission. Clearly stated course policies and assignment guidance will be used in determining whether the rules of academic integrity have been violated.
III. Research
Artificial intelligence applications are commonly used in quantitative analysis, coding, and other research contexts. A recent article notes, “In the near term, generative AI does seem to offer opportunities to enhance specific areas of research, namely (i) problem formulation and research design, (ii) data collection and analysis, (iii) interpretation and theorization, and (iv) composition and writing.”1 There is nevertheless an expectation that the final published results of a study in any discipline will be the original work of the authors. Thus, researchers are expected to follow these guidelines:
- Disclosure of methods
i. Much as the methodology of any research study should be disclosed, the use of AI tools in problem formulation, research design, data collection and analysis, and interpretation should be disclosed in the research output and documented where appropriate.
- Disclosure of generation of text and materials
i. The use of a generative AI large language model (LLM) tool to write text or create images should be disclosed
1. Include platform used
2. Document relevant inputs (including prompts) as well as outputs from the LLMs
3. Include the date of generation, since results may not be replicable as the LLM increases in sophistication
- Ethical use. All researchers must follow guidelines specified by the institutional IRB. Further, per the White House AI Bill of Rights, all research subjects should be protected from:
i. unsafe systems
ii. algorithmic discrimination
iii. sharing of private information
IV. Operations
Artificial intelligence tools are already used in some campus operations, primarily for data analysis. As these tools improve and are supplemented by generative AI tools, guidelines similar to those outlined above apply.
- Disclosure. The use of AI tools to analyze data and generative AI tools to write text or create images should be disclosed
i. Include platform
ii. Include the date of generation or use, since results may not be replicable as the models increase in sophistication
- Ethical use. Per the White House AI Bill of Rights, all members of the Baruch community should be protected from:
i. unsafe systems
ii. algorithmic discrimination
iii. sharing of private information
Helpful links:
MLA Guide to citing AI
Student Guide to AI
APA “How to Cite ChatGPT”
Chicago Manual of Style Guidance on Citing AI
1 Anjana Susarla, Ram Gopal, Jason Bennett Thatcher, and Suprateek Sarker (2023). “The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems.” Information Systems Research 34(2): 399–408. https://doi.org/10.1287/isre.2023.ed.v34.n2