Seminar: Moving Beyond 'Chatting' — How to Make AI a True Research Partner
In September at the Biomedical Optics & Optical Metrology Laboratory (KAIST), I delivered a seminar on a topic that has fundamentally reshaped my research workflow: How to turn AI from a Q&A tool into a long-term research collaborator.
Most researchers use LLMs. Few use them systematically. Even fewer use them in a way that truly scales across months of experimentation, iteration, and analysis.
This seminar introduced a structured methodology built on project files, iterative loops, retrieval workflows, and dual-AI orchestration.
1. Why Session-Based Chatting Fails in Research
LLMs are excellent at short interactions (“Ask → Answer → Done”). But research is not short. It requires consistent context, memory, and narrative stability.
Session-based workflows break down due to:
- Context Loss — Responses slow noticeably as a session grows, and hitting the maximum context length forces a summary-to-new-session transfer, which incurs both human and AI costs (re-explaining, re-aligning interpretation).
- Fragmentation — When a session covers too many topics, the resulting summary is fragmented, leading to a loss of coherent narrative.
- Iteration Fatigue — The repetition of summarize → transfer → reinterpret causes accumulated omissions, distortions, and redundancies, leading to a drop in quality.
The result is predictable: answers become less precise, more redundant, and disconnected from the project’s goals.
2. The Shift to Project-Based Collaboration
To solve this, my workflow moved from chats to structured project files. Memory lives in documents, not in the chat window.
The foundation is built on three files:
The AI-Ready Project Trio: README.md, index.md, guidelines.md
2.1 README.md: Project Overview
A human-friendly entry point that defines:
- Purpose and long-term goals
- Project phases
- High-level structure
- Links to index.md
It acts as the front door, ensuring the “big picture” never gets lost.
2.2 index.md: The Navigation Map (Your Manual Retriever)
This is the project’s central hub for context and navigation, and the file the AI should always read first.
Includes:
- Short project goal
- Current phase
- Condensed summaries
- Links to all major notes and logs
- Open problems and next steps
In effect, by maintaining this index, you are manually doing what an automated RAG retriever would do: you are the retriever. It’s a library with a catalog, not books scattered on the floor.
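As a concrete sketch of "you are the retriever": the snippet below parses the markdown links in an `index.md` and selects only the notes relevant to the current question before handing them to the model. The file names, link format, and `build_context` helper are illustrative assumptions, not a prescribed standard.

```python
import re

# Hypothetical index.md content; the structure mirrors the bullets above.
INDEX_MD = """\
## Current phase
Phase 2: aberration-correction experiments

## Notes
- [Dispersion log](notes/dispersion.md)
- [Open problems](notes/open_problems.md)
"""

def linked_files(index_text):
    """Return the .md paths referenced by markdown links in index.md."""
    return re.findall(r"\]\(([^)]+\.md)\)", index_text)

def build_context(index_text, topic):
    """Manual retrieval: keep only the linked notes matching the question's topic."""
    return [path for path in linked_files(index_text) if topic in path]

print(build_context(INDEX_MD, "dispersion"))  # → ['notes/dispersion.md']
```

Pasting only these selected files into the chat is the manual equivalent of a RAG retriever's top-k selection.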
2.3 guidelines.md: The Operating System
This file defines how the AI should behave. It functions like a personal “constitution” for your project.
Includes:
- Summarization rules
- Reasoning structure
- File naming conventions
- Behavioral constraints
- Interaction protocol
A strong guidelines file works like lightweight fine-tuning without any training.
How These Files Work Together
| File | Role | Purpose |
|---|---|---|
| README.md | Overview | Defines goals and structure |
| index.md | Map | Provides context and continuity |
| guidelines.md | Rules | Defines consistent AI behavior |
Together, they shift the workflow from:
Session-Based Q&A → Project-Based Collaboration
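One way to make the trio operational is to concatenate the three files into a single system-prompt prefix at the start of each session, with the behavioral rules first. This is a minimal sketch under that assumption; the ordering and the `compose_system_prompt` helper are my own illustration, not part of the seminar's tooling.

```python
from pathlib import Path

def compose_system_prompt(project_dir):
    """Concatenate the project trio, guidelines first so behavior rules lead.

    Missing files are skipped, so a young project with only a README still works.
    """
    order = ["guidelines.md", "README.md", "index.md"]
    parts = []
    for name in order:
        path = Path(project_dir) / name
        if path.exists():
            parts.append(f"# {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

The composed string can then be supplied as the system message (or custom instructions) of whichever model you use.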
3. A Forensic Methodology for Research
My workflow adopts a forensic-style structure:
- Collect Evidence — logs, errors, papers
- Build a Timeline — sequence events
- Detect Patterns — correlations, anomalies
- Interpret — hypotheses and next steps
This approach transforms ambiguous problems into analyzable systems.
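The four forensic steps map naturally onto a small data pipeline. The sketch below uses hypothetical evidence records (field names and messages are invented for illustration): sort by date to build the timeline, count repeated errors to detect a pattern, and surface recurring failures as candidates for interpretation.

```python
from collections import Counter
from datetime import date

# Hypothetical evidence records collected from logs and error reports.
evidence = [
    {"when": date(2025, 9, 3), "kind": "error", "msg": "phase unwrap failed"},
    {"when": date(2025, 9, 1), "kind": "log",   "msg": "baseline scan ok"},
    {"when": date(2025, 9, 5), "kind": "error", "msg": "phase unwrap failed"},
]

# Build a Timeline: order the evidence chronologically.
timeline = sorted(evidence, key=lambda e: e["when"])

# Detect Patterns: count how often each error recurs.
recurring = Counter(e["msg"] for e in timeline if e["kind"] == "error")

# Interpret: repeated failures become hypotheses to investigate next.
suspects = [msg for msg, n in recurring.items() if n > 1]
print(suspects)  # → ['phase unwrap failed']
```

Even this trivial structure turns a pile of logs into something the AI (and you) can reason over step by step.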
4. Case Study: CoCoA for OCT
I presented my ongoing work adapting a CoCoA-like AO framework for OCT. A dual-AI strategy proved essential:
- ChatGPT = The Organizer (file structure, Python templates, documentation)
- Claude = The Synthesizer (physics reasoning, cross-paper synthesis, conceptual gaps)
Together, they form a hybrid system stronger than either model alone.
5. Iterative Analysis: Small Steps, Tight Loops
Large questions create vague answers. The best workflow is:
- Ask a focused question
- Summarize
- Store
- Move to the next step
Each loop increases accuracy and cumulative understanding.
Small, fast cycles (Ask → Summarize → Store → Refine → Ask) outperform “one big question”.
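The loop above can be sketched as a few lines of Python. The `ask` and `summarize` functions here are stubs standing in for a real LLM call and a model-generated summary; only the loop structure itself reflects the workflow.

```python
def ask(question):
    """Stub for an LLM call; replace with a real API client."""
    return f"answer to: {question}"

def summarize(answer):
    """Placeholder summary; in practice, ask the model to condense its answer."""
    return answer[:80]

def research_loop(questions, store):
    """Ask → Summarize → Store, one focused question per cycle."""
    for q in questions:
        a = ask(q)            # Ask a focused question
        s = summarize(a)      # Summarize the result
        store.append((q, s))  # Store it, then move to the next step
    return store

notes = research_loop(["what limits axial resolution?"], [])
```

The stored `(question, summary)` pairs are exactly what gets written back into `index.md`, closing the loop between chatting and the project files.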
In practice, this approach sharply reduces rework and transfer cost while increasing insight density and accuracy.
6. The Scaling Problem, and Why RAG Matters
As project files grow, even structured summaries are not enough. Models begin to:
- miss older information
- overemphasize recent files
- produce diluted answers
RAG solves this by retrieving only the relevant parts of your archive before answering.
Two modes:
- Manual RAG — precise, human-curated (good now)
- Automated RAG — vector DB-driven (necessary later)
The principle:
Give the model less, but give it the right parts.
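A toy version of "give the model less, but the right parts": score each note by word overlap with the query and keep only the top-k. The scoring function is deliberately naive (an automated setup would swap it for embedding similarity against a vector DB); the document names and contents are invented for illustration.

```python
def score(query, text):
    """Count query-word overlap; a vector similarity search would replace this."""
    qwords = set(query.lower().split())
    return sum(1 for w in text.lower().split() if w in qwords)

def retrieve(query, docs, k=2):
    """Return only the k most relevant notes before answering."""
    ranked = sorted(docs, key=lambda d: score(query, d["text"]), reverse=True)
    return ranked[:k]

docs = [
    {"name": "dispersion.md", "text": "dispersion compensation in OCT"},
    {"name": "budget.md",     "text": "equipment purchase plan"},
]
top = retrieve("OCT dispersion", docs, k=1)
print(top[0]["name"])  # → dispersion.md
```

Manual RAG is you doing `retrieve` by hand via `index.md`; automated RAG is this function backed by embeddings and a reranker.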
7. Practical Tools for This Workflow
- VS Code for clean Markdown editing
- GitHub Pages for structured documentation
- Version control for reproducibility and narrative stability
These tools enable scalable, maintainable AI-assisted research.
8. The Future: Personal AI Research Ecosystems
As workflows mature, researchers can expand into:
- APIs
- vector DBs
- retrievers and rerankers
- automated pipelines
- semi-agentic systems
The end result: a private, domain-optimized AI research assistant.
Conclusion — From Asking to Co-Researching
The seminar’s central message:
AI becomes a research partner only when given structure.
Not through clever prompts, but through:
- project files,
- iterative loops,
- retrieval workflows,
- and explicit methodology.
With the right architecture, AI stops functioning as a tool and begins operating as a colleague—consistent, scalable, and deeply integrated into your research lifecycle.
📄 Download Seminar Slides: AI as Research Partner — Custom Instructions, Knowledge Management, and Iterative Analysis