Artificial intelligence is altering the conditions under which we teach and learn. Large language models, in particular, have made their way into every dimension of knowledge work, including higher education. For educators (and part-time adjunct instructors like me), this shift presents both a challenge and an opportunity. How do we teach students to understand, question, and use AI tools responsibly while also learning to integrate those tools into our own workflows?
Rethinking the Educator’s Role in the Age of AI
In my recent redesign of an undergraduate technology course, I set out to explore this question not only through the content but also through the way the course itself was developed. The result was a working model for how AI can act as a responsible teaching assistant when paired with clear frameworks, ethical constraints, and critical reflection.
As someone who leads with the Learner strength from CliftonStrengths, I find deep satisfaction in absorbing new concepts, systems, and techniques, then reshaping that knowledge into formats that others can apply. I approach teaching not as a transfer of information but as the design of layered learning experiences. Every element of a course, each reading, prompt, and activity, must support students in building capacity for judgment, not just proficiency with tools.
When I set out to refactor my course Technology in the Workplace for a new academic term, I was motivated by more than content updates. I wanted the course to reflect the reality that AI is now part of how work gets done, across all professions and fields of knowledge work. For students entering a labor market where entry-level roles are shrinking and automation is increasing, building AI fluency is not optional but foundational. I also wanted to model responsible use, both in how the course was structured and in how students were asked to engage with the material.
Initially, I approached AI as a time-saver. I asked AI to help me surface better resources, organize topics, and generate prompts. But I quickly realized that something was off. The more time I spent refining output and troubleshooting broken links, the less confident I felt in my own discernment. Rather than accelerating my thinking, I was diluting it. The exercise began to feel more extractive than productive. I needed a framework that would help me preserve human judgment while still partnering with the model in useful ways.
That is when I came across Anthropic’s AI Fluency course and its 4D Framework. The model (Delegation, Description, Discernment, and Diligence) offered a structure for how to think before collaborating with AI. It posed four essential questions:
- What tasks can be responsibly delegated to AI?
- How can I describe requests to maximize quality and minimize harm?
- Where should I challenge and question AI’s output?
- What requires diligence and human oversight before anything reaches students?
These were not technical questions; they were questions about how to collaborate with AI. With that pedagogical lens, I built a custom GPT grounded in the 4D model to guide every step of the course redesign. Below is a closer look at how each component of the framework shaped the course.
Delegation
I used AI to offload repetitive and tactical tasks that consumed time but did not require high levels of human insight. These included curating content from trusted public sources, organizing articles and videos into weekly modules, generating drafts of weekly announcements, and offering ideas for hands-on activities.
I also used AI to create outlines for lecture recordings and bullet point summaries for each module. These remained under my final control, but the drafts reduced friction and allowed me to reinvest energy in providing feedback and guiding student work.
While I used AI to modularize weekly content and identify patterns across tools and topics, the high-level architecture of the course, including grading strategy, sequencing, and outcomes, remained entirely human-led. I structured the course to introduce ethical considerations early and to scaffold both AI skills and reflection throughout.
Description
Many of the problems instructors encounter with AI stem from vague or under-specified prompts. I learned to write requests with far greater precision, specifying the target learning outcome, expected difficulty level, accessibility needs, time investment, and source boundaries. I also embedded ethical guidelines directly into the prompt. For example, I specified that content should avoid surveillance themes, biased framings, or tools that compromise user privacy.
This level of detail reduced hallucinations, minimized broken or paywalled links, and cut down on downstream editing. The results were never perfectly accurate, but the approach saved time and effort and surfaced content I would not have found through web search alone.
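As an illustration (not the exact prompts used in the course), here is a minimal sketch of what such a structured, constraint-laden request might look like; every field name and value below is a hypothetical example.

```python
# Illustrative only: a structured prompt template for requesting course content,
# with the learning outcome, constraints, and ethical boundaries made explicit.
CONTENT_REQUEST_TEMPLATE = """
You are helping design one module of an undergraduate course on workplace technology.

Learning outcome: {outcome}
Difficulty level: {difficulty}
Accessibility needs: {accessibility}
Expected student time investment: {time_budget}
Source boundaries: {sources}

Ethical constraints:
- Avoid surveillance themes or tools that compromise user privacy.
- Avoid biased framings; present tradeoffs from multiple perspectives.
- Prefer freely accessible, non-paywalled materials and cite every source.

Task: Suggest 3-5 readings or videos and one hands-on activity that meet the
outcome above and respect every constraint.
"""

# Hypothetical values for one week's module.
prompt = CONTENT_REQUEST_TEMPLATE.format(
    outcome="Evaluate how AI-assisted tools change collaboration in office workflows",
    difficulty="Introductory undergraduate, no programming background",
    accessibility="Captioned videos and screen-reader-friendly articles",
    time_budget="60-90 minutes for the week",
    sources="Trusted public sources such as vendor documentation and reputable press",
)
print(prompt)
```

Writing the request as a reusable template also made the constraints easy to audit and to reuse week over week, rather than reinventing them in each ad hoc prompt.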
Each week’s assignments incorporated tools aligned with contemporary workplace expectations, such as the Microsoft Productivity Suite. Layering Microsoft Learn tutorials with real-world prompting exercises helped students develop fluency with the AI tools found in modern professional environments.
Discernment
This was a critical dimension of the framework: I needed to know when to trust AI output, and also how to recognize when students were submitting AI-generated responses of their own.
Every week I offer a discussion topic, and this is where I most expected AI-generated responses to show up. I redesigned every prompt to build in friction that makes AI misuse unproductive and rewards critical thinking. Prompts emphasized ethical reflection, real-life tradeoffs, and creativity, and asked students to connect real-world use cases with risks, unintended consequences, and governance challenges. Responses of this kind cannot be generated without personal thought and synthesis, which grounded abstract topics in lived examples and embedded responsible AI throughout the discussions. AI-generated responses were explicitly banned, and in some cases I embedded white-font signals in the prompts to detect copy-paste attempts.
For weekly quizzes, I tested each set of questions inside ChatGPT. If the model scored above fifty percent, I revised the questions to reduce surface-level predictability and increase the demand for applied reasoning. The quizzes remained open book but rewarded understanding rather than regurgitation.
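I ran this check by hand, but the same audit can be scripted. The sketch below assumes the OpenAI Python SDK and an API key in the environment; the model name, questions, and answer key are hypothetical placeholders, not material from the course.

```python
# Illustrative sketch: score a quiz against a chat model to gauge how easily
# the questions can be answered without applied reasoning.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

quiz = [
    # (question, correct choice letter) - hypothetical examples
    ("Which workplace AI feature summarizes a long email thread?\n"
     "A) Focused Inbox  B) Copilot  C) Sweep  D) Quick Steps", "B"),
    ("A colleague pastes customer data into a public chatbot. What is the main risk?\n"
     "A) Slower replies  B) Data exposure  C) Higher cost  D) Formatting loss", "B"),
]

correct = 0
for question, answer in quiz:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works for this check
        messages=[{"role": "user",
                   "content": f"Answer with a single letter only.\n\n{question}"}],
    )
    reply = response.choices[0].message.content.strip().upper()
    correct += reply.startswith(answer)

score = correct / len(quiz)
print(f"Model score: {score:.0%}")
if score > 0.5:
    print("Revise: these questions are too predictable for an open-book quiz.")
```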
Diligence
I reviewed every piece of AI-generated content, editing more than half of it for accuracy, tone, accessibility, and alignment with learning objectives. I also verified all external links, examined the privacy implications of recommended tools, and tracked changes to source material. This diligence was not only about risk mitigation; it was also about modeling responsible AI use in practice. Because students were studying AI topics and using AI tools throughout the course, it was critical to instill best practices for accountability over co-created content.
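The mechanical part of link verification lends itself to a small script. The sketch below uses the requests library; the URLs are placeholders rather than the course's actual reading list, and a manual review still followed for anything flagged.

```python
# Illustrative sketch: flag broken or moved links before a module goes live.
import requests

module_links = [
    "https://learn.microsoft.com/training/",     # example URL, not from the course
    "https://example.com/a-reading-that-moved",  # placeholder
]

for url in module_links:
    try:
        # HEAD keeps the check lightweight; fall back to GET for servers that reject HEAD.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        status = "OK" if resp.status_code < 400 else f"BROKEN ({resp.status_code})"
    except requests.RequestException as exc:
        status = f"ERROR ({exc.__class__.__name__})"
    print(f"{status:>15}  {url}")
```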
What I Learned
The 4D model helped me rewire my own workflows while clarifying what must remain human. It allowed me to embrace AI as a teaching assistant while preserving critical thinking as the heart of the course. Three insights stand out.
First, systems thinking is essential. Tools change quickly, but frameworks like this one help educators design courses that are modular, resilient, and grounded in values.
Second, boundaries matter. Rather than banning AI, I designed constraints and friction points that redirected students toward deeper engagement. These constraints became signals of academic integrity, not obstacles to learning.
Third, human oversight is essential. The most powerful result of this redesign was not faster content production. It was a sharper sense of what I will never delegate: instructional judgment, student relationships, and the ethical framing of learning.
Teaching with AI on My Terms
This course is more than a syllabus. It is a working prototype of what responsible, AI-assisted teaching can look like: driven by human judgment, shaped by intentional design, and grounded in student agency. It makes visible the decisions that often remain hidden in course design. We have the choice to not just teach about AI, but with it, on our own terms.
Educators have a responsibility to show students how to use AI without losing themselves in the process. The 4D framework helped me do that. I offer it here not as a best practice, but as a lens: one that helped me keep pedagogy human, even when the assistant is not.