Ask Anything, Get Better: A Friendly ChatGPT Workflow for WGU's MBA

When I started WGU’s MBA in IT Management in March, I had two main goals: do the real intellectual work and use AI to make that work sharper. ChatGPT became my judgment-free study assistant, asking all the “dumb” questions, pressure-testing my logic, and turning rubrics into progress. Not a shortcut to grades; a shortcut to better thinking.

Working thesis: AI doesn’t reduce effort; it reallocates it toward higher-value cognition—synthesis, critique, and decision-making. This post shows the exact workflow I used.

Why a ChatGPT Study Assistant (not a shortcut)

MBA programs deliver oceans of material. What students actually need is structure and feedback. ChatGPT gave me both: fast feedback loops and clean ways to turn rubrics into study plans, quizzes, and drafts I could iterate on. It’s less “do it for me” and more “coach me through it.”

My Tools & Setup

Try this: Create a folder per course with the same sub-structure every time:


01_Rubric/Criteria.md
02_Notes.md
03_Drafts.md
04_Sources.md
05_Flashcards.csv

Consistency beats willpower.
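The scaffold above can be stamped out in seconds rather than rebuilt by hand each term. A minimal Python sketch, assuming a hypothetical course code `C213` (swap in your own):

```python
from pathlib import Path

# "C213" is a hypothetical WGU course code; replace with your own.
course = Path("C213")

# The standard sub-structure from the list above.
files = [
    "01_Rubric/Criteria.md",
    "02_Notes.md",
    "03_Drafts.md",
    "04_Sources.md",
    "05_Flashcards.csv",
]

for rel in files:
    target = course / rel
    target.parent.mkdir(parents=True, exist_ok=True)  # create 01_Rubric/ as needed
    target.touch()  # empty placeholder file
```

Run it once at the start of each course and the structure never drifts.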

Workflow: From Chaos to Clarity

At first, rubrics, PDFs, books, videos, and lecture notes felt overwhelming. Here’s the repeatable path I used.

1) Ingest & Label

Copy the rubric out of Canvas, paste it into ChatGPT, and ask it to extract the pass criteria into a checklist.

Prompt shape:


Convert this rubric into a deliverable plan:

* Headings I must include (1:1 with rubric)
* “Definition of done” (1–2 sentences) per heading
* Checkbox list of acceptable evidence per criterion
  Output: Markdown

2) Plan the Evidence

For each rubric line, list the specific evidence you’ll produce (citations, data points, frameworks). This prevents last-minute scavenger hunts.

3) Draft in Layers

Outline first, then add bullet-point evidence under each heading, then expand the bullets into prose. Each layer is a cheap checkpoint you can sanity-check before investing in the next.
4) Track Everything

Use consistent titles in ChatGPT Projects and matching filenames in Drive so search works with your brain, not against it.


From Passive Reading to Active Dialogue

I stopped passively reading the material and started talking with it. ChatGPT became a Socratic partner that never gets bored and never judges the question.

This kept me alert, honest, and far away from zombie-reading.

Try this (recall → application → evaluation):


I will paste notes. Create a 3-round quiz:

* Round 1: recall (terms, definitions)
* Round 2: application (short scenarios)
* Round 3: evaluation (tradeoffs, risks, counterarguments)
  After each answer, ask me to improve it in 1 sentence.

AI as a Mirror: Grader Mode (Coach, not Ghostwriter)

The most underrated move: let AI evaluate your draft.

  1. Paste your draft and the rubric.
  2. Ask ChatGPT to role-play a strict evaluator who highlights gaps, not stylistic nitpicks.
  3. Close the gaps yourself.

Guardrail: I never ask AI to write the final submission. I ask it to assess my work against the rubric, identify missing evidence, and flag weak logic. The writing stays mine.

Try this (evaluator mode):


Role-play a strict evaluator using the rubric below.

* Score each criterion (0–4) and justify the score.
* Identify missing evidence and weak logic.
* Suggest the minimum change to raise the score by one level.
  Do NOT rewrite my paragraphs; point to what must change.
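Once the evaluator returns per-criterion scores, a few lines of Python can turn them into a revision priority list. The criterion names and scores below are made-up placeholders:

```python
# Hypothetical scores copied back from evaluator mode (criterion -> 0-4).
scores = {"Analysis": 3, "Evidence": 2, "Recommendations": 4, "Sources": 3}

# Anything below 3 ("competent") becomes the next revision target.
gaps = [name for name, score in scores.items() if score < 3]

# Overall percentage against the 4-point ceiling.
overall = sum(scores.values()) / (4 * len(scores))
print(f"Overall: {overall:.0%}; revise first: {gaps}")
# -> Overall: 75%; revise first: ['Evidence']
```

Fix the lowest-scoring criterion first, re-run evaluator mode, and repeat until nothing sits below a 3.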

Accountability: My 5 Ethical Rules

  1. No ghostwriting. AI can brainstorm, structure, and critique; I do the final writing.
  2. Cite sources I actually read. No AI-invented citations.
  3. Show your work. Keep outlines, drafts, feedback cycles.
  4. Use AI for effort allocation, not avoidance.
  5. Be audit-ready. If asked, I can demonstrate the learning process end-to-end.

Reusable Prompt Snippets (Copy, Paste, Adapt)

1) Rubric → Outline Map


You are my study architect. Convert this rubric into an outline that is isomorphic to its criteria.
For each rubric item:

* Create a section heading.
* Add a “definition of done” (1–2 sentences).
* List 3–5 pieces of acceptable evidence (data, examples, citations, frameworks).
  Output: Markdown.

2) Read → Recall Drill


I will paste notes. Create a 3-round quiz:

* Round 1: simple recall (terms, definitions)
* Round 2: application (short scenarios)
* Round 3: evaluation (tradeoffs, risks, counterarguments)
  After each answer, ask me to improve it in 1 sentence.

3) Evaluator (Grader Mode)


Role-play a strict evaluator using the rubric below.

* Score each criterion (0–4) and justify the score.
* Identify missing evidence and weak logic.
* Suggest the minimum change that would raise the score by one level.
  Do NOT rewrite my paragraphs; point me to what must change.

4) Flashcard Generator


From these notes, generate CSV flashcards with columns:
Term, Definition, Example, Confuser (a commonly confused cousin), Source/Page.
Keep each definition ≤ 30 words.
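The resulting CSV drops straight into 05_Flashcards.csv, and Python's csv module can drive a quick self-quiz from it. The sample row here is a fabricated illustration with the same columns the prompt requests:

```python
import csv
import io

# Made-up sample of the generated CSV (columns match the prompt above).
sample = """Term,Definition,Example,Confuser,Source/Page
TCO,Total cost of ownership across an asset's full life,"Server price plus power, cooling, and admin time",ROI,Ch. 4
"""

# DictReader keys each row by the header line, so columns stay self-describing.
cards = list(csv.DictReader(io.StringIO(sample)))
for card in cards:
    print(f"Q: {card['Term']}  (watch out for: {card['Confuser']})")
    print(f"A: {card['Definition']}")
```

The same file also imports cleanly into Anki or Quizlet if you prefer a dedicated flashcard app.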

5) Integrity Check


Given this draft and this rubric, list any claims that require citations and any statements that look like speculation.
Suggest where primary sources would strengthen the argument.

Takeaways & Next Steps

If you’re starting your next WGU course, begin with three moves—Rubric → Outline, Read → Recall, Grader Mode. Ten minutes of setup can save hours and raise the ceiling on your work.

Think of AI as a climbing buddy: it shines a light on loose rocks and better routes, tells a terrible joke at altitude, and makes the ascent safer. You still summit under your own power—just with fewer wrong turns.
