Teaching the AI-Ready Graduate: Integrating AI-Related Competencies into Your Course

Earlier this month, I facilitated a well-attended and highly engaging CTLI workshop titled Teaching the AI-Ready Graduate: Integrating AI-Related Competencies into Your Course. The session brought together MSU educators from across disciplines to think collectively about what it means to prepare students for a world in which artificial intelligence is already shaping academic work, professional practice, and civic life.

This post builds on that workshop conversation and offers a written companion for those who attended, as well as those who were not able to join us live. Slides from the workshop are available here: Teaching the AI-Ready Graduate: Integrating AI-Related Competencies into Your Course 

Rather than revisiting tools or debating whether AI should be used in higher education, both the workshop and this post focus on a different question:

What responsibility do we have to prepare students for a world in which AI is already embedded, regardless of whether or how we personally use it?

To explore that responsibility, we first needed to shift the conversation.

From Opinion to Preparation

Conversations about AI in teaching and learning often evoke strong reactions: enthusiasm, skepticism, concern, and frustration, sometimes all at once. Early in the workshop, we intentionally clarified what the conversation was not. This was not a debate about the morality of AI, a mandate to adopt specific tools, or a session focused on academic integrity enforcement. Instead, the focus was student preparation and graduate readiness.

This reframing matters because employer signals around AI are becoming increasingly consistent. According to Microsoft and LinkedIn’s 2024 Work Trend Index, 66% of business leaders report they would not hire someone without AI skills, and 71% say they would prefer a less experienced candidate with AI skills over a more experienced candidate without them (Microsoft & LinkedIn, 2024). Similarly, reporting on labor market analyses by Brookings and Lightcast indicates that job postings requesting AI-related skills have increased by more than 600% over the past decade, with growth extending well beyond traditionally technical roles (CBS News, 2024).

At the same time, employers are not asking for AI experts across the board. Surveys summarized by the World Economic Forum emphasize judgment, decision-making, and the ability to evaluate AI outputs responsibly as the capabilities they value most in future employees (World Economic Forum, 2023). In other words, employers are prioritizing AI competencies rather than tool mastery. These signals suggest that AI readiness is becoming a baseline professional expectation, not a niche specialization.

Taken together, these signals shift the question from whether AI belongs in higher education to what kind of preparation students need before they graduate.

What Does It Mean to Be an “AI-Ready” Graduate?

Being AI-ready does not mean that every graduate must master every emerging tool. Instead, it points to a set of transferable capabilities, including:

  • understanding where and how AI is used within a field,
  • applying AI appropriately in disciplinary or professional contexts, 
  • recognizing ethical, social, and professional implications, and
  • critically evaluating AI-generated outputs.

Many institutions are now framing these capabilities as graduate attributes or outcomes students develop over time across a curriculum rather than in a single course. Once we began naming these capabilities, the next question became how such competencies actually develop over time within a curriculum.

To support this way of thinking, the workshop introduced the Paths of Learning model, which emphasizes developmental progression rather than one-time achievement.

The Paths of Learning

  • Awareness: Recognizing that a skill or practice exists and understanding its relevance.
  • Literacy: Developing foundational understanding and applying skills with guidance.
  • Proficiency: Demonstrating independent use, evaluation, and judgment.
  • Mastery: Applying skills in real-world contexts and supporting others’ learning.

To ground this model in something familiar, we used academic research literacy as an example. 

  • Awareness: knows that academic research exists; recognizes that peer-reviewed sources differ from popular sources; understands that research is important in their field
  • Literacy: can locate scholarly sources; understands basic structures (abstracts, methods, references); can use sources appropriately with guidance
  • Proficiency: independently finds, evaluates, and integrates sources; selects appropriate evidence for a given purpose; applies disciplinary conventions with minimal support
  • Mastery: designs or leads research-informed projects; evaluates the quality and contribution of research; mentors others in research practices; adapts research practices to new contexts

Most educators would not expect students to demonstrate research mastery in an introductory course, yet we often assume similar leaps when it comes to AI. This parallel helped surface how easily expectations can become misaligned, especially when we assume proficiency or mastery without having intentionally supported earlier stages of development. 

The Paths of Learning for AI

  • Awareness: Recognizing AI systems, uses, and limitations
  • Literacy: Interpreting outputs, questioning results, identifying bias
  • Proficiency: Applying AI strategically in disciplinary contexts
  • Mastery: Evaluating, adapting, or designing AI-supported workflows

Naming AI competencies along a developmental continuum is a useful starting point, but competencies alone are not enough to guide teaching and learning. To move from broad capability to intentional practice, those competencies must be translated into learning outcomes that clarify expectations for students, instructors, and programs. This is where alignment becomes critical.

From Competencies to Learning Outcomes

Competencies describe what graduates should ultimately be capable of. Learning outcomes translate those competencies into expectations that are explicit, teachable, and assessable.

Examining AI fluency learning outcomes articulated by peer institutions (e.g., the Ohio State University AI Fluency Initiative) offered a concrete example of how competencies can be made visible. The goal was not replication or benchmarking, but reflection. Participants were encouraged to consider questions such as:

  • Which competency levels are emphasized?
  • Where might these outcomes live across a curriculum?
  • What feels broadly applicable versus discipline-specific?
  • What conversations would this prompt within a department or program?

These questions are particularly relevant at MSU, where AI guidance is intentionally framed through guidelines rather than policy. This approach supports flexibility and disciplinary autonomy, but it also means that alignment does not happen automatically. Shared language around learning outcomes becomes especially important in this context.

Learning Objectives, Assessment, and Alignment

A key takeaway from the workshop was that conversations about AI often stall at assessment, not because assessment is impossible, but because expectations are unclear. When learning objectives are not explicit, assessment can quickly drift toward policing tool use or evaluating AI outputs by default. Alignment helps prevent this.

If an AI-related learning objective focuses on awareness or literacy, assessment might reasonably center on explanation, interpretation, or critique (e.g., asking students to evaluate the limitations of an AI-generated response or to reflect on when AI use would be appropriate in a given context). When learning objectives emphasize proficiency, assessment may shift toward students’ ability to apply AI strategically, justify their decisions, and independently verify outputs using disciplinary standards. In courses where AI use itself is the learning objective (such as writing, data science, or professional practice), it may be entirely appropriate to assess prompt design, output quality, or iterative refinement.

The key here is intentional alignment: we assess what we intentionally teach. When learning objectives, activities, and assessments are aligned, AI assessment becomes less about enforcement and more about surfacing student reasoning, judgment, and responsibility.

Once learning objectives and assessments are clarified at the course level, the remaining challenge is how those expectations fit together across courses and disciplines.

Discipline, Sequencing, and Ethical Responsibility

Another recurring theme in the workshop was that AI competencies look different across disciplines. Expectations should align with disciplinary norms, course level, and program sequencing. Uniformity is neither realistic nor desirable; intentionality is.

Without coordination, students may experience gaps, redundancies, or inequitable preparation as they move through a program. With even modest alignment (clarifying assumptions, articulating learning outcomes, or coordinating expectations across courses), faculty can reduce workload, support student learning, and create more coherent curricular pathways.

When AI-related expectations are unclear or inconsistent across courses and programs, students are left to make decisions on their own, often without shared norms, guidance, or language. In these moments, students are forced to infer what is acceptable, appropriate, or responsible. This ambiguity is where ethical questions arise, not as abstract principles, but as lived learning challenges. In this sense, ethics is not a separate conversation from sequencing and alignment, but a direct consequence of how intentionally (or unintentionally) we design learning pathways.

Ethical and responsible AI use is not a disclaimer to add to a syllabus; it is a learning outcome that benefits from repeated, contextualized practice. Similarly, assessment need not focus on detecting or policing AI use.

Instead, AI ethics assessments can be aligned with learning objectives by focusing on evidence of student thinking, such as:

  • reasoning and decision-making,
  • evaluation and verification of AI outputs,
  • transparency about use and limitations, and
  • reflection on ethical and professional implications.

When ethics, assessment, and alignment are treated as elements of learning design rather than compliance, they open the door to more coherent and sustainable approaches to AI readiness.

Moving Forward Together

Preparing AI-ready graduates is not about using tools everywhere. It is about making thoughtful, coordinated choices that reflect our values, disciplines, and institutional context.

For many educators, the most meaningful next step is not a major redesign but a conversation with a colleague, a program director, or a department about assumptions we may be making and how we might better support students’ development over time.

CTLI continues to support these conversations through workshops, consultations, and resources. If you are interested in exploring how AI-related competencies might fit into your course or program, we invite you to connect and continue the dialogue.

References

CBS News. (2024). AI skills are in demand as job postings surge, data shows. https://www.cbsnews.com

Microsoft & LinkedIn. (2024). 2024 Work Trend Index: AI at work is here. Now comes the hard part. https://www.microsoft.com

World Economic Forum. (2023). The future of jobs report 2023. https://www.weforum.org