
AI Use Guidance

Guidelines for AI Use at Baruch College

Developed by the AI Think Tank Governance and Operations Subcommittee with input from the campus community
Updated August 19, 2025

Note: This is a living document; the guidelines will be updated as AI technology evolves.

I. General Principles

Technologies assist people in their work, problem-solving, and day-to-day lives. Technologies enhance our capacities as teachers, students, scholars, and administrators. The rapid development of generative AI has begun and will continue to fundamentally alter the way we find, use, create, and disseminate knowledge and information (including data). These guidelines are intended to assist the Baruch College community in using artificial intelligence tools effectively and ethically.

Baruch College uses Copilot as its default generative AI tool because, as specified in the CUNY Microsoft 365 contract, Copilot will not share data input by CUNY users beyond the CUNY environment. In other words, when we use Copilot, we are training only the CUNY instance of Copilot, not its underlying model.

The key guideline for the use of generative AI is a simple one: disclose.

  • Disclose that content was generated by AI while providing details when appropriate (for example, the prompts used to generate content).
  • Disclose if AI was used to aid in decision-making, including the platform, prompt, and date.

IP and Data Security. Course materials are covered by copyright and as such should not be used to train a large language model (LLM) without the express consent of the copyright owner (for course materials, this is usually the faculty member). Faculty and staff are thus advised to use caution when uploading their intellectual property to GenAI LLMs outside of the CUNY Copilot environment. Similarly, institutional data may not be used with GenAI platforms outside of the CUNY Copilot environment.

GenAI “has no stake in the knowledge it produces and is thus likely prone to offering irresponsible outputs.”[1] Thus, AI outputs must be confirmed by human critical thinkers before they are used in operations, teaching, learning, or research.

II. Teaching and Learning

The ready accessibility of generative AI (GenAI) tools means that students, faculty, and staff are already using them in their work. (Examples of GenAI tools include MS Copilot, ChatGPT, and Google Gemini.) Further, students will graduate into a workforce that uses AI of all kinds, and some may even create AI tools themselves. As an educational institution, Baruch therefore has an obligation to provide students with opportunities to learn to use AI effectively, responsibly, and ethically.

  1. Syllabi. Faculty are encouraged to integrate artificial intelligence education into their courses as appropriate to their disciplines, but are not obligated to do so. All syllabi should include a course- and discipline-appropriate policy on the use of generative AI. Faculty should clearly indicate on the syllabus when, whether, and to what extent AI use is allowed in a class, including whether AI tools are permitted for editing (e.g., Grammarly) even if disallowed for content generation (e.g., ChatGPT, Copilot, Gemini). To reduce student confusion about what is permissible across multiple classes, faculty should remind students of these guidelines and expectations clearly and frequently throughout the semester. Possible sanctions should also be listed on the course syllabus (e.g., a reduction in points or the grade on an assignment, failure of an assignment, or, in egregious cases, failure of the course). The course policy should set clear expectations about the use of GenAI tools and provide a rationale for how and when they can be used. For example, GenAI tools can encourage critical thinking and improve the quality of deliverables on some assignments, while on others it may be appropriate to prohibit their use, such as when an assignment is meant to assess learning of foundational concepts. (Sample syllabus policies are available on the AI Resource Hub.)
  2. Assignments in which generative AI is integrated or allowed should clearly state the parameters of such use. These parameters should include:
    i. Disclosure that the content was generated by AI
    ii. Student responsibility to verify the accuracy of AI-generated content
    iii. Documentation of prompts used and other relevant inputs
    iv. AI should not be used to generate illegal or inappropriate content
    v. AI should only be used when allowed by the course instructor
  3. Student Learning: Learning at the university level emphasizes the why (rationale and the history of arguments) and the how (constructing good arguments, evaluating sources, engaging in iterative problem solving) in addition to the what (specific content). While GenAI tools can help students significantly by making specific content (the what) easily available, they can obstruct learning by obscuring the why and the how. Hence, faculty should consider how GenAI tools can help and/or hinder the learning process in their own domain.[2] As distinct from guidance about the use of AI for assignments, faculty are encouraged to include in their syllabi or other course materials suggestions for ways to use AI as a learning assistant to achieve mastery of course content (for example, how to use GenAI tools iteratively to focus on the how and why in addition to the what).[3]
  4. Academic Integrity. At all times, students must adhere to the institution’s academic integrity policy. Generative AI is allowed only where given explicit permission from the instructor. Clearly stated course policy and assignment guidance will be used in determining if the rules of academic integrity have been violated. See this guide to Academic Integrity and AI, available on the AI Resource Hub.
  5. Using AI for student assessment, grading, and feedback: Faculty should take great care when using AI for student assessment. Faculty are expected to remain actively engaged in the assessment of learning even while using AI to assess student work; AI output must be verified by the faculty member of record. Faculty must ensure that they use only authorized tools (MS Copilot) when uploading original student content. They must also disclose how AI is being used in grading and must address student concerns about specific use cases.
  6. If faculty want to use another GenAI tool, they should consult with their department chair and BCTC about the cybersecurity of such tools. Department chairs, in collaboration with BCTC, will review available tools and submit additional appropriate tools or platforms to the institutional intake process for procurement.

III. Research

Artificial intelligence applications are commonly used in quantitative analysis, coding, and other research contexts. A 2023 article notes, “In the near term, generative AI does seem to offer opportunities to enhance specific areas of research, namely (i) problem formulation and research design, (ii) data collection and analysis, (iii) interpretation and theorization, and (iv) composition and writing.”[4] There is nevertheless an expectation that the final published results of a study in any discipline will be the original work of the authors. Thus, there is an expectation that the following guidelines will be followed:

  1. Disclosure of methods
    Much as the methodology of any research study should be disclosed, the use of AI tools in problem formulation, research design, data collection and analysis, and interpretation should be disclosed in the research output and documented where appropriate.
  2. Disclosure of generation of text and materials
    The use of a generative AI large language model (LLM) tool to write text or create images must be disclosed:
    1. Include the platform used
    2. Document relevant inputs (including prompts) as well as outputs from the LLM
    3. Include the date of generation, since outputs may not be replicable as the LLM increases in sophistication
  3. Ethical use. All researchers must follow guidelines specified by the institutional IRB. Further, all research subjects should be protected from:
    i. unsafe systems
    ii. algorithmic discrimination
    iii. sharing of private information.
  4. Finally, many funding agencies have adopted specific AI use guidelines that must be adhered to in the application and in the subsequent funded research.
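The disclosure items above (platform, prompts and other inputs, and date of generation) can be captured in a simple structured record. The sketch below is illustrative only; the field names and the helper function are this sketch's own assumptions, and the College does not prescribe any particular format.

```python
# Illustrative only: a minimal record of a generative-AI use disclosure,
# capturing the fields the guidelines ask for (platform, prompt, output, date).
# The function name and field names are this sketch's own, not a mandated format.
from datetime import date

def make_ai_disclosure(platform, prompt, output_summary, used_on=None):
    """Return a simple disclosure record for inclusion in research documentation."""
    return {
        "platform": platform,                    # e.g. "Microsoft 365 Copilot"
        "prompt": prompt,                        # relevant input given to the LLM
        "output_summary": output_summary,        # brief description of generated content
        "date_of_generation": (used_on or date.today()).isoformat(),
    }

record = make_ai_disclosure(
    platform="Microsoft 365 Copilot",
    prompt="Summarize the survey responses in two paragraphs.",
    output_summary="Draft summary text, subsequently revised by the authors.",
    used_on=date(2025, 8, 19),
)
print(record["date_of_generation"])  # 2025-08-19
```

A record like this can be kept alongside research notes or appended to a methods section, satisfying the documentation points above in one place.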

IV. Operations

Artificial intelligence tools are already used in some campus operations, primarily for data analysis. As these tools improve and are supplemented by generative AI tools, similar guidelines apply to those outlined above.

  1. The use of AI tools to analyze data and of generative AI tools to write text or create images should be disclosed:
    i. Include the platform
    ii. Include the date of generation or use, since outputs may not be replicable as the models increase in sophistication
  2. Ethical use. All members of the Baruch community should be protected from:
    i. unsafe systems
    ii. algorithmic discrimination
    iii. sharing of private information.
    To this end, it is critical that personal and confidential student and employee data be kept within the CUNY Microsoft Copilot environment.
  3. AI-assisted meeting recordings and summaries: All meeting attendees should be given the option to accept or deny the recording of meetings or use of AI-generated meeting summaries.

Allegations of misuse will be investigated by the appropriate College or University body (e.g., Office of the Dean of Students, Office of Academic Affairs).

V. AI Tool Availability

Microsoft Copilot is the only GenAI platform licensed for use at the University. There are two tiers of licenses available:

  1. Microsoft 365 Copilot Chat
    1. Licensed for all faculty, staff, and students
    2. Web-based chat that shows results from the internet
  2. Microsoft 365 Copilot
    1. Work-based chat grounded in your enterprise data[5], [6]
    2. Access Microsoft 365 Copilot in apps such as Teams, Outlook, Word, PowerPoint, and Excel
    3. Create and use agents with Copilot Studio

Use of Microsoft 365 Copilot and Microsoft 365 Copilot Chat involves prompts (entered by users) and responses (content generated by Copilot). Both prompts and responses are protected by Microsoft’s Enterprise data protection (EDP) controls, ensuring:

  1. Your data is secure
  2. Your data is private
  3. Your data isn’t used to train foundation models

References related to the Microsoft license:

https://www.microsoft.com/en-us/microsoft-365/copilot

https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-licensing

https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-ai-security

https://learn.microsoft.com/en-us/copilot/microsoft-365/enterprise-data-protection


[1] Lindebaum and Fleming (2024), “ChatGPT Undermines Human Reflexivity, Scientific Responsibility and Responsible Management Research,” British Journal of Management 35: 566–575.

[2] Adapted from McMaster University, “Guidelines on the Use of Generative AI in Teaching and Learning”

[3] For further reading and suggested prompts, see Mollick and Mollick (2023), “Assigning AI: Seven Approaches for Students with Prompts”

[4] Susarla, Gopal, Thatcher, and Sarker (2023), “The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems,” Information Systems Research 34(2): 399–408.

[5] In the context of Artificial Intelligence, grounding data refers to the process of providing an AI model with specific, relevant information, often from external sources, to help it generate accurate and contextually appropriate responses.

[6] Microsoft 365 Copilot accesses resources on behalf of the user, so it can only access resources the user already has permission to access. If the user does not have access to a document, for example, then Microsoft 365 Copilot, working on the user's behalf, will not have access to it either.

