Syracuse University provides several AI platforms for use by students, faculty, staff, and other authorized users. All of them are University information systems governed by the same institutional policies. This page explains how your data is handled and what protections are in place.
All University AI Platforms
Are my interactions logged?
As with all Syracuse University information systems, interactions with AI platforms are logged as part of standard system operations. This is the same principle that applies to email, learning management systems, and every other campus technology resource. It is governed by the University's Information Technology Resources Acceptable Use Policy and the Information Security Framework.
Who can access data stored on these systems?
As with all computing systems, data stored within Syracuse University information systems is accessible to authorized IT staff to support, secure, maintain, upgrade, troubleshoot, and back up those systems. This access is strictly governed by the Acceptable Use Policy and the Information Security Framework, and is exercised in accordance with best practices and departmental guidelines.
Technology partners who provide services to the University under contractual data protection agreements may also have access as part of system operations.
Is my data sold?
Data from University AI platforms is not sold.
Is my data used to train AI models?
The AI platforms licensed by Syracuse University through the ITS business office — including Claude Enterprise, Microsoft Copilot, Google Gemini, and ChatGPT Teams — are contracted with provisions that prohibit using your data to train their AI models. Clementine interaction data is likewise not used to train or fine-tune any underlying language models.
These protections apply when you are logged in with your Syracuse University NetID; logging in with your NetID is what places your use under the University's contractual terms. If you use a personal subscription or a free tier of any AI tool, those agreements are between you and the vendor, and the University has no input on their terms.
For more on which tools are approved for University data and the importance of logging in with your NetID, see the AI Guidelines.
What policies govern Syracuse University's AI information systems?
All University AI platforms are information systems subject to the same policies as any other campus technology:
- The Information Technology Resources Acceptable Use Policy governs how IT resources may be used and the University's rights with respect to accessing data stored on those systems.
- The AI Guidelines outline which tools are approved for University data, the importance of logging in with your NetID, and protections around confidential information.
- The Vendor AI Policy governs how technology vendors may and may not use institutional data.
AI is a new technology, but the policy framework is not. The same principles that protect your data across every other University system apply here.
Clementine Assistants — What's Different
What makes Clementine different from Claude Enterprise, Copilot, Gemini, or ChatGPT Teams?
Claude Enterprise, Microsoft Copilot, Google Gemini, and ChatGPT Teams are general-purpose AI tools. Your conversations on those platforms are between you and the tool; no University team reviews individual conversations.
Clementine is different. It hosts purpose-built assistants — like the Class Search assistant — that are created and maintained by specific University teams. Each assistant has a defined scope, specific data sources, and a trustee (the University employee or team who built it).
When you use an assistant on Clementine, your interactions are accessible to that assistant's trustee(s). This is much like submitting a question through any other University service: a person on the other end reviews what comes in to make the service better.
Why does the assistant's trustee review interactions?
Because the quality of the response is the service. The trustee(s) review interactions to improve the assistant for all users. Specifically, reviews help to:
- Improve accuracy — Identify responses where the assistant gave incorrect, incomplete, or misleading information so the underlying data or instructions can be corrected.
- Close gaps — Discover questions the assistant should be able to answer but can't yet, so the right data and capabilities can be added.
- Improve clarity — Find situations where the assistant had the right information but communicated it in a confusing or unhelpful way.
- Ensure quality — Verify that the assistant stays within its intended scope and provides a reliable experience.
Your interactions help make the assistant more useful, more accurate, and more helpful for everyone who uses it.
How does the review process work?
Periodically, the assistant's trustee(s) review interactions against the data the assistant is meant to use and the instructions it is meant to follow. When issues are found — a wrong answer, a confusing response, or a question the assistant should have handled but couldn't — the team traces the issue to its root cause and makes targeted improvements.
The team also looks at patterns in user experience: situations where someone had to rephrase a question multiple times, or where the assistant gave a technically correct answer that wasn't practically helpful. The result is a tighter, more reliable assistant that gets better over time because of how people actually use it.
Does the thumbs up/down feedback I leave get reviewed?
Ratings are one of the most valuable signals available to the team. When you rate a response, that feedback is paired with the interaction during reviews. A thumbs-down tells the team something didn't work even if the answer looked correct on paper, and a thumbs-up helps identify what "good" looks like so it can be replicated. Your ratings directly shape which improvements get prioritized.
| Data Practice | Claude Enterprise, Copilot, Gemini, ChatGPT Teams | Clementine Assistants |
|---|---|---|
| Interactions logged per University policy | Yes | Yes |
| Interactions accessible to authorized IT staff | Yes | Yes |
| Interactions accessible to an assistant's trustee/creator | Not applicable — no individual trustee | Yes — the trustee(s) who built the assistant |
| Interactions reviewed to improve response quality | Individual conversations are not reviewed | Yes — reviewed by the assistant's trustee(s) |
| User feedback (ratings) reviewed | Not applicable | Yes — ratings inform improvement priorities |
| User data sold | No | No |
| User data used to train AI models | No — prohibited under University contracts | No — used to improve instructions and data only |
| Governed by SU Acceptable Use Policy | Yes | Yes |