Critical Reading Mode: Teaching the AI to Question Itself
Critical Reading Mode adds a second layer of AI analysis to Teaching and Scholarly reports — annotating them with scholarly critique, counter-perspectives, and questions you wouldn't think to ask.
I added a new feature to the Anselm Project that I'm calling Critical Reading Mode. It's available now for Teaching and Scholarly reports.
The basic idea: after you generate a report, you can run a second layer of AI analysis that annotates the original report with scholarly critique, counter-perspectives, and questions you wouldn't think to ask on your own.
It costs 3 credits and the result gets cached permanently on your report.
Why This Exists
When you generate a Teaching or Scholarly report through the Anselm Project, you get comprehensive analysis. The AI examines the passage from multiple angles - literary structure, historical context, theological themes, exegetical details.
But those reports have a perspective. They make interpretive choices. They prioritize certain methods over others.
Critical Reading Mode adds a second voice. It doesn't replace the original report. It annotates it. The AI reads what the first AI wrote and asks: What's missing here? What would a scholar challenge? What assumptions are buried in this analysis?
The annotations appear as teal-bordered panels inserted directly below each subsection of your report. You can toggle them on or off with a button at the top.
How It Works
The process runs in two phases.
First, the AI reads through all the section headings in your report and builds a coverage plan. It decides how many annotations each subsection should get (anywhere from one to four) and which scholarly themes each section should cover. It also marks which themes to avoid in specific sections to prevent repetition across the entire report.
This planning phase uses GPT-5-mini and runs once per report, coordinating the entire annotation strategy before any actual annotations get written.
Second, the AI generates the actual annotations. For each subsection in your report, it writes commentary based on the coverage plan. This phase runs concurrently - up to 50 subsections at a time - to keep generation fast.
The whole thing takes about 30-60 seconds depending on report length.
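The two-phase flow above can be sketched in JavaScript. This is a minimal illustration, not the actual implementation: `buildCoveragePlan` and `annotateSubsection` are hypothetical stand-ins for the real model calls, and only the shape of the pipeline (one planning pass, then a concurrency-limited fan-out of 50) comes from the post.

```javascript
// Phase 1: one planning call per report (GPT-5-mini in the real system).
// Stubbed here: assign each subsection one to four annotations.
async function buildCoveragePlan(headings) {
  return headings.map((heading, i) => ({
    heading,
    annotationCount: 1 + (i % 4), // between one and four, per the post
    themes: [],                   // scholarly themes this section should cover
    avoidThemes: [],              // themes to skip, to prevent repetition
  }));
}

// Phase 2: one model call per subsection (stubbed here).
async function annotateSubsection(planEntry) {
  return {
    heading: planEntry.heading,
    annotations: [`critique of "${planEntry.heading}"`],
  };
}

// Run async tasks with at most `limit` in flight at once.
async function runConcurrent(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}

async function generateAnnotations(headings) {
  const plan = await buildCoveragePlan(headings);   // phase 1: once per report
  const tasks = plan.map(p => () => annotateSubsection(p));
  return runConcurrent(tasks, 50);                  // phase 2: up to 50 at a time
}
```

The worker-pool pattern in `runConcurrent` keeps a fixed number of requests in flight rather than launching everything at once, which is one common way to respect an API concurrency limit.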
What Gets Annotated
Critical Reading Mode is only available for two report types: Teaching and Scholarly. Devotional reports don't qualify because they're not structured for academic analysis.
Within those two types, certain sections get skipped. Teaching reports skip sections like Structural Analysis, Big Idea, and Sermon Outline - eleven sections total. Scholarly reports skip five sections including Original Language and Current Debates.
Why skip sections? Because some sections are purely functional (sermon outlines, application points) and don't benefit from scholarly critique. The AI focuses on exegetical and theological sections where counter-perspectives actually matter.
Very short subsections also get skipped. If a subsection is fewer than 80 characters, there's not enough substance to annotate.
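The eligibility rules above amount to a simple filter. A sketch, with the caveat that the skip lists here are illustrative samples, not the full eleven Teaching and five Scholarly sections:

```javascript
// Illustrative skip lists -- the real system excludes eleven Teaching
// sections and five Scholarly sections.
const SKIPPED_SECTIONS = {
  teaching: new Set(["Structural Analysis", "Big Idea", "Sermon Outline"]),
  scholarly: new Set(["Original Language", "Current Debates"]),
};

const MIN_SUBSECTION_LENGTH = 80; // characters, per the post

function isAnnotatable(reportType, sectionTitle, subsectionText) {
  const skips = SKIPPED_SECTIONS[reportType];
  if (!skips) return false;                  // only Teaching and Scholarly qualify
  if (skips.has(sectionTitle)) return false; // functional sections are skipped
  return subsectionText.length >= MIN_SUBSECTION_LENGTH;
}
```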
Annotation Quality
I built specific constraints into the prompts to keep the annotations useful.
The AI can't start annotations by recapping what the subsection already says. It has to begin with an insight or a question. It has to quote actual words from the text - concrete evidence, not vague references.
It can't name the scholarly method being applied. No "A form critic would note..." or "From a canonical perspective..." Just the insight itself.
It can't use generic academic throat-clearing. No "scholars debate" or "one might argue." Plain language. Direct statements.
The goal is to teach something the reader wouldn't notice on their own. If the annotation doesn't add value beyond what's already in the report, it's not doing its job.
Technical Implementation
The annotations get saved as JSON to S3. The Report model tracks the status and the storage key.
On the frontend, the page polls every three seconds while generation is running. When it finishes, the JavaScript fetches the annotation JSON and matches each annotation to the corresponding subsection heading in the DOM. It inserts teal-bordered panels below each subsection.
The toggle button at the top shows or hides all annotations at once. That state persists in localStorage, so if you reload the page, the annotations stay visible or hidden based on your last choice.
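The polling and toggle-persistence logic above can be sketched as follows. The status function and storage object are injected so the logic stays self-contained; in the real page these would be a `fetch()` call and `window.localStorage`. The `"annotationsVisible"` key name is an assumption.

```javascript
// Hypothetical localStorage key for the toggle state.
const TOGGLE_KEY = "annotationsVisible";

// Poll the injected status function until it reports completion
// (every three seconds in the real page).
async function pollUntilReady(fetchStatus, intervalMs = 3000) {
  while ((await fetchStatus()) !== "complete") {
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}

// Annotations default to visible; only an explicit "false" hides them.
function isVisible(storage) {
  return storage.getItem(TOGGLE_KEY) !== "false";
}

// Flip visibility, persist the new state, and return it.
function toggleAnnotations(storage) {
  const next = !isVisible(storage);
  storage.setItem(TOGGLE_KEY, String(next));
  return next;
}
```

Defaulting to visible means a first-time visitor (with nothing in storage) sees the annotations immediately, and only a deliberate toggle hides them on reload.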
What This Costs
Critical Reading Mode costs 3 credits per report. That's three times the cost of generating the original report.
Why three credits? Because it's running dozens of AI calls - one for the coverage plan, then one per subsection for annotation generation. The total token usage is comparable to generating three standard reports.
But once you run it, the result is cached permanently. You can toggle the annotations on and off as many times as you want without additional cost.
Why I Built This
The Anselm Project generates solid biblical analysis. The reports are comprehensive. But I wanted a way to surface the methodological choices being made under the hood.
When the AI says "this passage emphasizes X," what's the alternative interpretation it's not mentioning? When it applies a certain hermeneutical framework, what would a different framework reveal?
Critical Reading Mode makes those questions explicit. It doesn't give you a different report. It gives you a conversation between two AI voices - one doing exegesis, the other critiquing that exegesis.
I've tested this on multiple reports from the Anselm Project Bible and the results are useful. The annotations catch things like overconfident claims, unstated assumptions, and places where the original report could acknowledge scholarly disagreement.
If you want to see it in action, generate a Teaching or Scholarly report and run Critical Reading Mode on it. The button appears at the top of your report viewer. Click it, confirm the 3-credit cost, and wait about a minute.
The annotations will appear automatically when generation finishes.
God bless, everyone.