
Use of Artificial Intelligence (AI) in the Workplace

Section: 400
Section Title: Administration and Finance
Policy Number: 414
Policy Name: Use of Artificial Intelligence (AI) in the Workplace
Approval Authority: President's Senior Leadership Team
Responsible Executive: Vice President with Oversight of Information Technology Services
Responsible Unit: ITS, POER
Date Adopted: December 5, 2025
Policy

Policy Statement

Ramapo College of New Jersey (hereafter “RCNJ”) is committed to the responsible, ethical, and transparent use of Artificial Intelligence (AI) technologies in the workplace. While AI tools may be used to enhance efficiency, decision-making, and innovation, their use must align with RCNJ’s core values of integrity, accountability, fairness, and respect for privacy.

RCNJ employees are required to use AI in ways that:

    • Comply with all applicable laws, regulations, and organizational policies;
    • Avoid harm, bias, or discrimination;
    • Protect confidential and personal information; and
    • Support—not replace—human judgment in sensitive or high-impact decisions.

Further, AI use must be documented, monitored, and periodically reviewed to ensure continued alignment with these standards.

This policy articulates and complies with the security control requirements stated in the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) and its supporting NIST Special Publication (SP) 800-171, and applicable laws, regulations, and best security practices.

Reason for Policy

To provide clear guidelines for employees regarding the responsible and ethical use of Artificial Intelligence (AI) technologies, including but not limited to generative AI (also known as GenAI) programs such as ChatGPT, within the College.

To Whom Does the Policy Apply

This policy applies to all employees, contractors, and third-party vendors who utilize AI tools and technologies in the course of their work with Ramapo College of New Jersey. Additionally, the policy encompasses all systems and information owned, managed, or processed by RCNJ and its authorized employees for non-instructional, business, or support purposes. It also extends to any external or non-RCNJ systems that interconnect with or exchange data with RCNJ-managed systems.

Supplemental Resources

Procedure

I. Definitions

Employee. For the purposes of this Procedure, the term “employee” refers to all individuals who work for Ramapo College in exchange for financial or other compensation, and the term “employee” includes all part-time and full-time staff, faculty, adjuncts, managers, and student workers.

Artificial Intelligence. As defined by IBM, “Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy.”

II. Scope

a. Controls. This policy and procedure are intended to address the requirements of the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF), its supporting NIST Special Publication (SP) 800-171, and the security controls contained therein. Specifically, this policy addresses compliance with the following NIST CSF categories and subcategories relevant to the responsible use and governance of artificial intelligence systems:

Identify (ID):

    • ID.AM-1: Maintain an inventory of all AI tools and systems used within the institution, documenting ownership, purpose, and associated data.
    • ID.RA-1: Conduct risk assessments to evaluate the potential impact of AI systems on privacy, fairness, and security.
    • ID.BE-4: Ensure alignment of AI system usage with institutional objectives and regulatory requirements.

Protect (PR):

    • PR.AT-1: Conduct security and ethical awareness training for personnel managing AI systems.

Respond (RS):

    • RS.RP-1: Develop and implement incident response plans specifically for AI systems, including handling ethical or student conduct violations.

b. Why These Controls Are Relevant. Identify (ID) establishes awareness of AI use cases and their potential risks, ensuring a clear understanding of system dependencies and compliance obligations. Protect (PR) ensures the safeguarding of sensitive data and establishes security baselines for AI tools. Respond (RS) outlines how to mitigate and communicate risks associated with AI use, whether ethical or operational.

c. Educational or Instructional Use. This policy and procedure do NOT apply to, preempt, or supersede any academic policies that apply to faculty or students regarding the educational or instructional use of AI.

d. Exceptions to Policy. All exceptions to this policy must be approved by ITS Leadership and documented accordingly. Exceptions may be granted under the following circumstances:

    • Activities governed by academic or research policies;
    • Instances where compliance conflicts with principles of academic freedom; or
    • Emergency or temporary uses of AI systems.

See Section IV.o. for additional information regarding exceptions.

III. Rules

The following rules must be followed when using AI on College systems or networks:

a. IMPORTANT: Do not upload or input any confidential, proprietary, or sensitive College or student information into any GenAI tool! 

    • Examples include passwords and other credentials, Protected Health Information (PHI), data classified as moderate or high risk in RCNJ Policy/Procedure 410: Data Protection (PII), personnel material, information from documents marked Confidential, Sensitive, or Proprietary, and any other nonpublic College information that might be of use to malicious entities or harmful to the College if disclosed. Failure to comply with this policy may breach your or the College’s obligations to keep certain information confidential and secure, may risk widespread disclosure, and may cause the College’s rights to that information to be challenged.

b. Verify that any response from a GenAI tool that you intend to rely on or use is accurate, appropriate, not biased, not a violation of any other individual or entity’s intellectual property or privacy, and consistent with RCNJ policies and applicable laws.

c. Do not use GenAI tools to make or help you make personnel decisions about applicants or employees, including recruitment, hiring, retention, promotions, transfers, performance monitoring, discipline, demotion, or terminations.

d. Do not upload or input any personal information (names, addresses, likenesses, etc.) about any person into any GenAI tool.

e. Do not represent work generated by a GenAI tool as being your own original work.

f. Do not integrate any GenAI tool with internal College software without first receiving specific written permission from your supervisor and the ITS Department.

g. If you are unsure if a tool is GenAI, seek the counsel of ITS prior to using it.

IV. Guidelines

The responsible use of AI can significantly enhance organizational capabilities and improve efficiency. By adhering to this policy and procedure, employees contribute to a positive and innovative workplace culture while ensuring that our use of AI aligns with the College’s core values and ethical standards. Guidelines for use include:

a. Appropriate Use of AI. GenAI tools can be valuable for enhancing productivity, streamlining processes, and supporting decision-making; however, they are not a substitute for human judgment and creativity. The output from these tools is often prone to inaccuracies, outdated information, or false responses, making careful human verification essential. Employees must critically evaluate AI-generated suggestions or plans using their knowledge of the College’s values, policies, procedures, and strategies, while also collaborating with colleagues to gain different perspectives and reduce the risk of errors. AI tools should be used to supplement, not replace, traditional methods of problem-solving and decision-making, with appropriate validation such as cross-referencing information, performing tests when feasible, or consulting experts. Additionally, employees must treat any information shared with AI tools as if it could go viral on the Internet and be attributed to them or the College, regardless of tool settings or assurances from its creators. By using AI responsibly and maintaining human oversight, we can optimize its benefits while minimizing risks.

b. Data Privacy and Security. Employees must adhere to all data privacy and security protocols when using AI technologies. This includes ensuring that any data input into AI systems complies with our data protection policies and relevant legal regulations. Sensitive or confidential information, including student data, pre-decisional work, negotiations, personal details, or any data classified as moderate- to high-risk as outlined in RCNJ Policy/Procedure 410: Data Protection (PII), must never be shared with AI tools, as these tools learn and generate content based on the input data. Users should ensure that any data input complies with College policies and legal regulations, preserving data security, intellectual property, and confidentiality. If unsure whether specific information is appropriate to use with the AI tool, employees should consult their supervisor, the ITS department, the College’s internal auditor, or the legal department. Non-compliance with data protection policies and legal regulations may result in disciplinary action.

c. Risk Assessments for AI Usage. In the course of using AI tools, employees should always be aware of the inherent risks these technologies pose. These may include potential inaccuracies or misinterpretations in AI-generated content due to lack of context, legal ambiguities concerning content ownership, and possible breaches of data privacy. As such, a critical attitude towards AI outputs is required at all times. To ensure that risks associated with AI usage are effectively managed, it is the responsibility of management to incorporate AI-specific risk assessments into the College’s broader risk management procedures. This includes continually evaluating and updating protocols to identify, assess, and mitigate potential risks, with considerations for changes in AI technology, its application, and the external risk environment. This also necessitates periodic training and awareness sessions for employees to ensure they stay informed about these risks and the steps needed to mitigate them.

d. Use of Third-Party AI Platforms. Employees should exercise caution when using third-party AI platforms due to the potential for security vulnerabilities and data breaches. Before using any third-party AI tool, employees are required to verify the security of the platform. This can be done by checking for appropriate security certifications, reviewing the vendor’s data handling and privacy policies, and consulting with the College’s ITS cybersecurity team if necessary. Moreover, data shared with third-party platforms must comply with the guidelines outlined in the section on Data Privacy and Security. In situations where employees are unsure about the use of a third-party platform, they should seek guidance from their supervisors or the ITS security team. Employees should not integrate any AI tool with software provided by or maintained by the College without first receiving specific written permission from their supervisor and the ITS Department.

e. Use in Communications. AI tools, when used appropriately, can aid in facilitating efficient internal communication within Ramapo College. This includes drafting emails, automating responses, or creating internal announcements. However, while using AI for these purposes, it is crucial that employees adhere strictly to the College’s policies on ethics, harassment, discrimination, and professional conduct. AI-generated communication should be respectful, professional, and considerate, mirroring the high standards of interpersonal communication expected at Ramapo College. Any misuse of AI tools for communication, including any language or behavior that violates College policies, will be treated as a serious violation and may lead to disciplinary action.

f. Transparency and Accountability. Employees should maintain transparency in their use of AI tools. When AI-generated outputs are utilized in decision-making processes, employees should not represent work generated by an AI tool as being their own original work. Rather, employees should include a footnote in their work indicating which AI tool was used and when it was used. Also, employees must be prepared to explain the rationale behind these decisions and the role AI played in them. Accountability for decisions made with the assistance of AI remains with the employee.

g. Training and Support. The organization will provide training and resources to help employees understand how to effectively and responsibly use AI tools. Use case is an important factor in determining which tools are appropriate, and employees are expected to evaluate whether a given tool fits their intended purpose. In addition to internal resources, employees are encouraged to complete relevant courses – such as those available free online – on AI ethics, data privacy, and secure usage. Employees should seek assistance from their supervisors or the ITS department if they have questions or require support regarding AI technologies.

h. Ethical Considerations. Employees must consider the ethical implications of using AI in their work. This includes assessing AI outputs to detect and avoid bias, considering whether AI outputs would have a negative impact on institutional reputation or integrity, ensuring fairness in decision-making, and being mindful of the potential impact on employees, students, stakeholders (i.e., board members, alumni, etc.), and vendors with whom the College has contractual relationships. Any concerns regarding ethical use should be reported to management.

i. AI Tool Vetting and Inventory. ITS maintains a website that lists AI tools vetted for use by employees and students. AI tools used by employees and students on the College’s network or on College-provided computing machines must undergo a formal vetting process to ensure they meet established security requirements. If an employee or student has a mission-related reason to use an AI tool that is not listed on the ITS website, then the employee or student should submit a request to ITS, which will assess whether the tool aligns with the College’s security requirements. Approval to use an AI tool not listed on the ITS website must be granted by the Chief Information Officer (CIO), when appropriate in consultation with the head of the requesting department. (See NIST control ID.AM-1.) (Note: if the AI tool requires purchase, then please refer to the Software Request policy for details about the process to follow in order to procure the tool.)

j. Essential Pre-Use Considerations for New AI Tools. Responsibility for vetting AI tools does not reside solely with ITS. As AI technologies continue to evolve and expand, the campus community shares a role in ensuring responsible use. Employees are encouraged to research any AI tool (not on an approved list or encouraged for use by ITS) prior to use with attention to its purpose, data handling practices, and potential risks. The checklist below highlights key considerations to help evaluate whether an AI platform—if not already vetted by the College—meets acceptable ethical standards and safeguards, and guides users on how to responsibly approach its use:

    1. Handling of Inputted Data. Regardless of any encryption or security measures claimed by the tool, I understand that I should avoid submitting sensitive or personally identifiable information. I also understand that I should not share confidential or valuable intellectual property with the tool.
    2. Purpose and Context. I should use the platform appropriately in alignment with relevant organizational or legal requirements. I understand that not all AI tools are suited for every task; therefore, I should evaluate whether the tool aligns with the specific purpose, intended outcome, and organizational context of the task.
    3. Consideration of Others. I understand that if others (e.g., students, colleagues, etc.) are expected to use the platform, then clear communication, informed consent, and alternatives should be made available.
    4. Data Ownership and Use. The platform should clearly explain who owns the submitted content and how it may be stored, reused, or used for AI training.
    5. Privacy and Security. The platform should provide a transparent privacy policy and describe how it protects data (e.g., via its encryption, access control, and retention policies). The platform should also provide users with the option to clear personalized memory from conversations with the tool.
    6. Ethical Use and Risk Awareness. The platform should make its ethical considerations and risks easy to understand, and should commit to responsible, safe, and fair AI use.

k. Compliance with Regulations. Employees must comply with all applicable laws and regulations governing the use of AI technologies. This includes intellectual property rights, data protection laws, and industry-specific regulations.

l. Non-Personal Use. AI tools provided by Ramapo College are for business use only and must not be used for personal purposes. This policy is in place to ensure the maintenance of a professional and productive environment, the preservation of institutional resources, and the prevention of potential legal and security risks. Personal use of these tools could involve sharing of inappropriate or sensitive content, misuse of time and resources, and potential breach of data privacy regulations.

m. Monitoring. Ramapo College reserves the right to monitor all employee interactions with AI tools for the purpose of ensuring compliance with this policy and procedure.

n. Non-compliance. Non-compliance with this policy and procedure may result in disciplinary action.

o. Exceptions. Any exceptions to this policy and procedure must be documented by RCNJ ITS, along with the reason for the exception and the mitigations that will reduce the risk associated with not fully implementing this policy and procedure. Exceptions may include, but are not limited to, legacy systems or applications that do not permit configuration to the extent required by this policy and procedure, and systems that are not under the direct control of RCNJ.

p. Review and Updates. This policy and procedure will be reviewed annually and updated as necessary to reflect changes in technology, legal requirements, and organizational needs. Employees will be notified of any significant changes to the policy.

Note: The typli AI Text Generator (see https://typli.ai/ai-text-generator) was used on 1/27/2025 to generate an initial draft of this policy statement.