"The concrete includes the abstract and exceeds it in value." Nancy Frankenberry wrote this about Jonathan Z. Smith's cartography of religion, but it applies equally well to my undergraduate religious studies course-turned-venture-studio. The concrete act of building a company must exceed the abstract theory of entrepreneurship. The religious act cannot be mistaken for the study of the religious act. When the balance reverses, the map eats the territory. This is what happened in my classroom.
The Assignment That Seemed Clever
At the start of the semester, I gave my students what I thought was an elegant pedagogical hack. They would audit their own LinkedIn profiles and resumes using an LLM, identifying three "hiring stoppers": specific gaps that recruiters cited as blocking their candidacy for target roles. Then, working in cross-functional teams, they would reverse-engineer a venture concept that, through the simple act of building it, would generate the bullet points they lacked. A finance student needing "evidence of commercial ownership" might co-found a revenue-generating service. An engineering student lacking "shipped artifacts" might own the technical build. The venture would be real enough to sell, small enough to ship by May, and theoretically engaged enough to satisfy the course's "academic" requirements around projection theories of religion. I gave them a detailed prompt template, access to LLMs, and a single class session to generate their first concept statements. The results would tell me whether they understood the syllabus and whether I had designed something that could actually work.
The Confidence Trap
What returned was, on first scan, impressive. Six teams produced concept statements with "Aha Moments," metaphysical throughlines citing Feuerbach, and evidence trails mapping each hiring stopper to specific product features. But as I read more closely, a pattern emerged that I hadn't anticipated: every single venture was a platform for documenting the kind of work the students were currently doing. Group 1 proposed "Judgment Ledger," where students logged business decisions to prove their judgment. Group 5 offered "Veritas Trail," a "dashcam for your brain" that captured decision trails. Group 6 created "Reality Ledger," measuring how AI affects perception of truth. The ventures weren't pointing outward at strangers with problems; they were curved mirrors reflecting the assignment back at itself.
The students, or the LLMs, or all of us, had followed my prompt so literally that they had built a recursive loop where the solution to "I need resume lines" was "a product that generates resume lines." I had accidentally designed an assignment that trained them to confuse the map for the territory, and the LLM, eager to please and pattern-matching on startup discourse, had enthusiastically abetted the confusion.
The prompt asked students to reverse-engineer a venture from their resume gaps. But something strange happened, an instance of the "echo chamber" effect that plagues AI-augmented learning. When students used LLMs to brainstorm, the models (trained on startup discourse about "solving problems") mirrored back exactly what the students were already doing in class. The result? Six variations of "a platform to help students prove they have skills": ventures that were nominally about credentialing but were actually just the assignment itself, staring back at them.
I have started calling this tendency the Applied AI Classroom Parallax View: when the tool meant to expand imagination instead collapses it into recursive self-reference. The students ended up ideating themselves to such an extent that they built mirrors. Judgment Ledger, Veritas Trail, Reality Ledger: all systems for documenting judgment that were, themselves, exercises in documenting judgment. The syllabus became the product. The hiring stopper became the solution. The map ate the territory.
Ironically, or poetically, Feuerbach's projection theory came to life. The students were given a prompt, an LLM, and a deadline. What emerged was not six ventures but six mirrors, each reflecting the assignment back at itself with increasing fidelity.
The Echo Chamber Effect
When Group 1 asked GPT-4 to "generate a venture concept that addresses gaps in the students' resumes," the model, trained on thousands of startup pitch decks, did what it was designed to do: it found the pattern. The pattern was that students struggle to prove their skills because they do not yet have experience. So it generated "Judgment Ledger," a platform for students to document their decision-making. The students didn't notice that the "customer" was themselves in three months. The LLM was suggesting they build a tool for the problem of "needing to build a tool."
Group 5 took this further. "Veritas Trail" was a "dashcam for your brain while you work," a meta-tool so recursive it threatened infinite regress, turtles all the way down. If you used Veritas Trail to document your work on Veritas Trail, did you need a Veritas Trail for your Veritas Trail? The LLM, asked to solve "no evidence of end-to-end ownership," proposed a product that was only evidence of end-to-end ownership. The map had eaten the territory.
The Mirroring Problem
The syllabus required ventures to engage metaphysics: truth, authority, meaning. The students, forced by the prompt to look inward, found these categories in their own academic experience. Group 3's "ClearGround" treated dental consent as a "truth ledger" because they had just read about "inspectable truth" in the prompt. Group 6's "Reality Ledger" measured "how AI reshapes perceptions of truth," which was, conveniently, exactly what they were doing in class.
The exercise did not do what I had hoped: expand their imaginations. It collapsed them. When we asked for "metaphysics," the LLM returned academic theology. When we asked for a "venture," it returned ed-tech. What I hoped would become a list of possible ventures for them to start became a list of theoretical frameworks for why ventures are hard.
The Parallax View
Slavoj Žižek defines parallax as "the apparent displacement of an object (the shift of its position against a background), caused by a change in observational position that provides a new line of sight." The philosophical twist is that the observed difference is not simply "subjective," due to the fact that the same object which exists "out there" is seen from two different stances. Rather, subject and object are inherently "mediated," so that an "epistemological" shift in the subject's point of view always reflects an "ontological" shift in the object itself.
In the Applied AI classroom, parallax became the failure to recognize that the object and the background were the same. The students stood at the intersection of two interpretive systems: the academic (what they were learning) and the entrepreneurial (what they were building). But the LLM, trained on text where these coordinates are often conflated, kept returning them to the intersection itself. My head is still spinning, and I love it.
When Group 2 proposed "Tactile Narrative Systems," haptic devices for invisible phenomena, they nearly escaped. The solar eclipse example was concrete, physical, not about credentialing. But even here, the "aha moment" revealed the parallax: "The real product wasn't the hardware. It was the translation layer between raw data and human meaning." They had built a metaphor for their own assignment. Or maybe they had built an assignment for the metaphor we are all experiencing when we interface with LLMs.
Why It Happened
The prompt was designed to make the personal universal: your gaps become the world's needs. But LLMs are symmetry machines. They find the shortest path between input and output, and the shortest path from "my resume is weak" is "build a tool that fixes resumes." The students, working quickly, accepted the first plausible output. They didn't iterate because the initial output felt right. It addressed the rubric, cited the readings, satisfied the constraints. And this is why I am writing at all: we are all new at learning with AI. We do not know what to trust, or how to build trustworthy teaching protocols. We are learning as we go, as we grow.
What they missed was the indexicality of the venture: a real business points outward, to strangers with problems. Their ventures pointed inward, to students with rubrics. The "customer" was always another version of themselves. I could add 60 pages here about why the study of religion helps navigate these challenges with insiders and outsiders and all that. But I won't.
The Correction
Resume lines write themselves when the work is for someone else. My students (and I) are now back on track; next up is finalizing a product or service idea and doing customer research. The fix was hermeneutical: solve a customer's problem, not our own. This is the anti-parallax: the venture and the learning are perpendicular, not parallel. The student learns by building; the customer benefits by using. Neither is a mirror of the other.
The Lesson
AI-assisted education risks this collapse whenever the prompt allows the student's situation to become the content. As many of us build out "How" to teach in the age of AI, one solution may involve constraining the coordinate system: build for strangers, in physical space, with money changing hands.
What would this look like in a non-entrepreneurial class? In a literature seminar, instead of asking students to "analyze the themes of this novel using AI," you might ask them to "generate a reading guide for a specific type of reader who is not yourself: a high school teacher in rural Montana, a prison book club, a translation app developer." The analysis still happens, but it points outward. In a biology lab, instead of "use AI to explain this cellular process," you might ask students to "create a troubleshooting guide for a community health worker in a region with intermittent electricity." The knowledge still gets demonstrated, but it must survive contact with constraints that are not academic.
Žižek describes the parallax gap as "the confrontation of two closely linked perspectives between which no neutral common ground is possible." This is what my students encountered: the academic perspective and the entrepreneurial perspective, linked by the shared vocabulary of "venture" but separated by irreconcilable demands. The academic wants reflection; the entrepreneur wants traction. The LLM, trained to please, offered a synthesis that was actually a collapse.
The parallax view is seductive because it feels like integration. But it is also disorienting, because it collapses a shift in point of view into a shift in priority and expectation. Real integration requires friction, the resistance of the world pushing back. Learning with AI is hard to build, hard to sell, and hard to explain. That difficulty is the point. It is the difference between a mirror and a window.
I am here for it. So are my students.