Build the Assistants You Wish You Already Had
A practical workshop for professors to build teaching, research, and administrative assistants in ChatGPT using your materials and professional judgment.

Teaching Assistant
Supports course design, feedback, and instructional clarity.
Lesson outline drafts
Rubric language support
Feedback comment bank
Quiz question generator
Study guide builder
Example and analogy ideas
You approve. You edit. You decide.

Research Assistant
Supports thinking, drafting, and analytical clarity.
Argument structure checks
Outline from notes
Counterargument prompts
Clarity rewrite options
Thematic synthesis help
Abstract and title drafts
It supports your thinking. It does not replace it.

Administrative Assistant
Supports routine service and operational work.
Email template drafts
Committee memo outlines
Meeting notes summaries
Policy text translation
Report structure support
Reusable form language
Reduces repetition. Preserves judgment.
Built on Faculty-First Principles
Framework-Focused, Tools Change
This is why we created the TIME Framework:
Work first, AI second (or not at all)
Minimum Effective Use
Human-in-the-loop always
Context + Policy Matter
Goal: more time to be human, not more work.


Do I need prior experience with AI?
No.
This workshop is designed for faculty with little or no prior experience using AI tools. No coding or technical background is required.
The focus is not on mastering a platform. It is on building structured, repeatable workflows aligned to your teaching, research, and service responsibilities.
If you can write an email or outline a syllabus, you can participate.
By the end of the workshop, you will have built:
A structured Teaching Assistant aligned to one of your courses
A Research Assistant configured to support your writing or analysis
An Administrative Assistant to reduce repetitive service work
A clear decision framework for when and how to use AI responsibly
These are supervised digital assistants grounded in your materials, standards, and professional judgment.
The outcome is reduced cognitive load and time recovered — not automation.
Is this about building autonomous AI agents?
No.
This workshop does not rely on autonomous AI agents.
Faculty work is episodic, judgment-heavy, and ethically constrained. Automating decisions in teaching, research, or service introduces unnecessary risk and complexity.
Instead, this program is built around a human-in-the-loop model. You remain accountable. AI supports drafting, structuring, and organizing — it does not operate independently.
The goal is clarity and efficiency, not delegation of professional responsibility.
Does using AI mean lowering my standards?
No.
The workshop is built on the principle that AI is a capable but unreliable assistant. It requires supervision.
You will learn how to:
Maintain intellectual rigor
Preserve disciplinary standards
Apply ethical and policy-aware boundaries
Use AI only where it meaningfully reduces effort
The aim is not to produce more work. It is to reduce unnecessary cognitive friction while preserving quality.
Do I need a paid ChatGPT account?
A paid version is recommended but not required.
Some features — such as Projects and custom GPT configuration — are more reliable or only available with a paid plan.
Participants using the free version can still engage with the core principles and build functional workflows, though some features may be limited.
The workshop assumes a single, consistent environment to reduce complexity and experimentation fatigue.
Why ChatGPT?
Most faculty work is text-based: course materials, feedback, manuscripts, reports, and administrative documents.
ChatGPT provides a stable environment that supports teaching, research, and service within one system.
The emphasis is on learning a framework that outlasts any individual tool. Tools will change. Structured thinking does not.
Should I use a personal or university account?
That decision should be guided by your institutional policies and personal comfort level.
Participants may use either a personal or university-provided account, if available.
The workshop includes guidance on managing risk responsibly. This includes avoiding sensitive or protected data and applying the same discretion used with email, cloud storage, and other academic technologies.
You remain responsible for aligning your use with institutional expectations.
How much time does this take?
Time commitment varies based on how you choose to engage.
The workshop includes:
A rapid setup pathway for faculty who want immediate implementation
A guided setup pathway for those who prefer a more deliberate, step-by-step approach
Both lead to the same outcome: a structured assistant aligned to your real responsibilities.
The goal is to recover 5–8 hours per week over time — not to add more tasks to your schedule.
How is this different from just prompting ChatGPT?
Most faculty experimentation with AI is prompt-based and session-based.
That approach has limitations:
Context is not consistently embedded
Prior decisions and standards are not preserved
Outputs vary significantly from session to session
You end up rebuilding instructions repeatedly
Over time, this increases cognitive load rather than reducing it.
This workshop focuses on building structured, context-anchored systems. Your assistants are configured around specific courses, research projects, or service responsibilities.
Instead of re-explaining your standards each time, you develop an organized environment that:
Preserves context
Maintains consistency
Reduces repetition
Improves reliability
The difference is not better prompts.
It is better structure.
Build structured digital assistants for teaching, research, and service — without lowering standards or compromising integrity.
Copyright 2026. All rights reserved.