Polk Library

AI in Scholarly Research

AI can accelerate discovery, synthesis, and drafting, but it does not replace scholarly judgment. The Archaeological Dig Model illustrates how AI supports research while preserving human interpretation and authorship.

A framework for AI-integrated research

The Archaeological Dig Model of AI-Integrated Research defines the research process in an AI-mediated environment, organized into three phases—discovery, verification, and synthesis.

It establishes how AI can be used in ways that preserve human judgment, evidence integrity, and scholarly authorship. At each stage, scholars must make informed decisions about AI use, evidence, and authority.

While tools will continue to evolve, the foundational principles of research remain constant—even as AI reshapes when and how key decisions are made.

Extending broader AI literacy frameworks that emphasize technical knowledge, ethical awareness, and critical evaluation (Lo, 2025), this model specifies how those competencies are enacted through decision-making across the research lifecycle.

These decisions shape how research unfolds across each phase of the process and provide a foundation for instruction, evaluation, and responsible AI use.

Some tools listed below are not institutionally provided or fully open access. They may be explored individually using free or freemium tiers with optional paid plans. Use of these tools is optional and not required by the institution. Access models are compared in the Q&A section.

DISCOVERY
VERIFICATION
SYNTHESIS

Seven circular icons illustrating the Archaeological Dig Model of research: surveying, excavation, mapping, authentication, library authority, synthesis, and scholar judgment.
Surveying the Site
Excavating & Sorting
Mapping the Landscape
Authentication & Context
Library Authority
Synthesis & Interpretation
Scholar as Final Authority

The Archaeological Dig Model of AI-Integrated Research

A decision-based framework for AI-integrated research

At each stage of this model, research is shaped by key decisions about AI use, evidence, and authority.

How Scholarly Decisions Shape AI-Integrated Research

At each stage of AI-integrated research, scholars make decisions that shape how information is generated, evaluated, and used. These decisions are not tied to specific tools, but reflect enduring aspects of scholarly practice in an AI-mediated environment. The following decisions were synthesized from patterns identified across recent research on AI use in academic contexts.

Key idea: AI can support research at every stage, but research quality ultimately depends on how these decisions are made.

Discovery
Defining direction and shaping research questions in an AI-mediated environment

AI Engagement

When and why should I use AI?

What research suggests: AI is most effective when used to support discrete stages of thinking, such as brainstorming, clarification, or language refinement. However, early or unstructured reliance can shape or constrain the development of original ideas by introducing externally generated patterns before independent thinking is established.

Implication for research practice: Researchers must determine whether AI is supporting their thinking or prematurely structuring it.

Guidance for practice:

  • Use AI as a targeted tool, not a substitute for initial idea formation.
  • Apply AI to bounded tasks, such as clarification, refinement, or language polishing after your own direction is established.
  • Avoid generating substantive content before developing your own ideas and argument structure (Tekir, 2026; Pryma et al., 2025).

Problem Framing

How is AI shaping my understanding of the topic?

What research suggests: AI-generated explanations can broaden initial understanding but often function as early interpretive frames, shaping what appears relevant, plausible, or complete. This can lead to conventional or formulaic research questions if not critically examined.

Implication for research practice: Researchers must remain aware of how AI may pre-structure the conceptual boundaries of a topic.

Guidance for practice:

  • Begin with your own exploratory thinking before consulting AI-generated summaries.
  • Actively question and cross-reference AI explanations rather than accepting them at face value.
  • Monitor whether your framing reflects your inquiry or reproduces common AI-generated patterns (Lo, 2026; Shen & Chen, 2025; Pryma et al., 2025).

Relevance & Positioning

What information is worth keeping, and how does it fit?

What research suggests: Effective use of AI depends on a researcher’s ability to evaluate and filter outputs. Without strong evaluative judgment, users are more likely to accept plausible but irrelevant or flawed information, especially when AI outputs appear to align with their topic.

Implication for research practice: Researchers must determine not only whether information is useful, but how it contributes to their developing argument and disciplinary context.

Guidance for practice:

  • Filter AI-generated content through your disciplinary knowledge and research goals.
  • Evaluate whether information advances your argument rather than simply fitting the topic.
  • Be cautious of accepting outputs based on fluency, coherence, or apparent relevance alone (Kim et al., 2025; Thong et al., 2025; Urban et al., 2025; Pryma et al., 2025).
Verification
Evaluating the accuracy, credibility, and validity of information

Accuracy

Is this information correct?

What research suggests: AI systems can produce hallucinations and confidently stated inaccuracies, especially in complex or specialized areas. Without active verification, these errors can enter research unnoticed.

Implication for research practice: Researchers remain responsible for verifying factual claims, no matter how authoritative AI outputs appear.

Guidance for practice:

  • Cross-check AI-generated claims against credible scholarly and primary sources.
  • Prioritize verification for data, definitions, methods, and interpretations.
  • Treat fluency and confidence as insufficient indicators of accuracy (Thandla et al., 2024; Urban et al., 2025).

Authority and Credibility

Is this a reliable and appropriate source?

What research suggests: AI outputs must be evaluated against established scholarly sources, disciplinary standards, and library databases. Credibility depends not only on whether information sounds plausible, but whether it can be supported by reliable sources.

Implication for research practice: Researchers must distinguish between information that is convincing and information that is credible within a scholarly context.

Guidance for practice:

  • Validate AI outputs against peer-reviewed literature and library databases.
  • Confirm that sources are appropriate for your research purpose and audience.
  • Use scholarly sources as the standard for credibility, not AI-generated summaries alone (Pinninti, 2025; Hapsari & Rizky, 2025).

Can I locate and verify where this information comes from?

What research suggests: AI-generated references and claims frequently lack reliability, with studies showing high rates of fabricated, inaccurate, or unverifiable citations. This introduces significant risk when AI outputs are treated as authoritative without verification.

Implication for research practice: Researchers must treat AI outputs as starting points for inquiry, not as verified sources.

Guidance for practice:

  • Independently verify all references and citations before use.
  • Confirm that sources exist and accurately support the claims attributed to them.
  • Use AI to assist discovery, but rely on databases, primary sources, and scholarly records for validation (Maulidiyah, 2025; Pinninti, 2025; Thandla et al., 2024; Urban et al., 2025).
Synthesis
Constructing knowledge and producing original scholarly work

Integration

How do I use AI without replacing my thinking?

What research suggests: AI can support writing and synthesis, but passive reliance can reduce originality and depth.

Implication for research practice: Researchers must stay actively engaged and use AI to support, not replace, their thinking.

Guidance for practice:

  • Use AI after developing your own initial ideas or structure.
  • Review and revise AI output rather than accepting it directly.
  • Stay engaged in drafting and revision (Nguyen et al., 2024; Pryma et al., 2025).

Authorship and Voice

What is my intellectual contribution?

What research suggests: AI can shape tone and structure, which may reduce visibility of the writer’s own perspective.

Implication for research practice: Researchers must ensure that their ideas and interpretations remain central.

Guidance for practice:

  • Ensure your main ideas and arguments come from your own thinking.
  • Use AI for support, not to generate your core content.
  • Check that your voice is present in how ideas are explained and developed (Tekir, 2026; Jacob et al., 2024).

 

Applying the Model: Tools and Workflows

  1. Surveying the Site (Discovery & Orientation): Semantic Scholar, ResearchRabbit, LitMaps, Connected Papers, Consensus
  2. Excavating & Sorting (Evidence Extraction): Elicit
  3. Mapping the Landscape (Research Networks): ResearchRabbit, LitMaps, Connected Papers
  4. Authentication & Context (Source Verification): OpenAlex
  5. Library as Conservation Authority (Scholarly Access & Authority): PsycINFO, CINAHL, Web of Science, library databases
  6. Synthesis & Interpretation (Evidence Summaries & Writing): Consensus, NotebookLM, Copilot, ChatGPT
  7. Scholar as Final Authority (Judgment & Authorship): You — verification, disclosure, and scholarly responsibility

Scholarly principle: AI can accelerate discovery and drafting, but credible scholarship ultimately depends on verified sources, library systems, and human judgment.

I. Ethical, Transparent & Reproducible AI Use

This section provides grounding in ethical, transparent, and reproducible AI practices for UW Oshkosh faculty. The goal is to help you use AI tools confidently while aligning with scholarly, disciplinary, and institutional norms. While policies vary across journals and agencies, several shared principles have emerged. The sections below summarize widely accepted trends and provide faculty-friendly guidance.

✅ Responsible AI Tool Use (UW Guidance)

This resource includes both UW-approved (campus-vetted) AI tools and public tools (shared for awareness and optional exploration). These public tools are currently undergoing the campus review process.

Per UW System Administrative Policy SYS 1031, generative AI tools may be used with low risk data (e.g., public information) or when no institutional data is shared. Do not enter confidential, protected, or personally identifiable information (PII) into AI tools—regardless of support status (e.g., student records, unpublished manuscripts, grant/IRB-sensitive content, restricted datasets).

For research, faculty may use public research tools (e.g., Consensus, Elicit) for literature discovery and synthesis using public/low-risk sources. Avoid uploading unpublished work, proposals, peer reviews, student work, or restricted datasets into public AI platforms.

👤 A. Humans Are Always the Authors

Across disciplines and publishers, there is unanimous consensus: AI cannot be listed as an author. Human researchers retain full responsibility for:

  • verifying factual accuracy,
  • editing and interpreting all AI-assisted content,
  • ensuring originality and scholarly integrity.

Humans must maintain 100% accountability for published work.

🔒 B. Protect Unpublished Work & Ensure Originality

Do not upload confidential manuscripts, proposals, student work, or peer reviews into public AI tools. This includes:

  • unpublished data or manuscripts,
  • grant applications or reviews,
  • manuals, proprietary materials, or IRB-sensitive content.

Your manuscript must always reflect your own scholarly contribution.

📄 C. Disclose Meaningful AI Assistance

Most journals require disclosure when AI meaningfully shapes the intellectual content of a manuscript.

Disclosure typically required for AI use in:

  • drafting or rewriting text,
  • summarizing or restructuring content,
  • brainstorming or research design,
  • coding, data analysis, or image processing.

Disclosure typically not required for:

  • light grammar, spelling, or formatting edits,
  • citation-style conversions (e.g., APA ↔ MLA),
  • basic proofreading.

Typical disclosure placement: acknowledgments or methods section.

Must include:

  • tool name,
  • model or version (e.g., GPT-5.1),
  • purpose and scope of use,
  • confirmation of human verification.

πŸ“ D. Example AI Use Statements

Sample statements you can adapt:

“ChatGPT (GPT-5.1) was used to summarize background literature on [topic] and to suggest alternative phrasings. All AI-generated outputs were reviewed, verified, and revised by the author.”
“Consensus was used to review peer-reviewed evidence on [topic]. Final interpretation and synthesis were performed by the author.”
“NotebookLM was used to generate summary tables from a curated set of uploaded PDFs. All interpretations, analyses, and final text were developed by the author.”

📚 E. Well-Known Publications & Bodies with AI Guidance

Many major journals and funding bodies now publish guidance on AI use. While details vary, common themes include: AI cannot be an author, meaningful AI assistance must be disclosed, and reviewers must not upload confidential manuscripts or proposals into AI tools.

  • Economics – American Economic Review (AER): AI software may not be listed as an author. Use for drafting or editing must be briefly described/disclosed during submission. Authors are solely accountable for fact-checking AI outputs. Policy page: American Economic Review editorial policy (AI guidance).
  • Political Science – American Journal of Political Science (AJPS): AI must be disclosed for any element, including copyediting or writing code. Authors should avoid using AI to write the manuscript or substantial elements such as the literature review. Reviewers cannot use AI to directly evaluate or write any part of the review report. Policy page: AJPS author guidance and policies (check AI disclosure requirements).
  • Humanities / Language – PMLA (Modern Language Association): AI tools cannot be listed as an author. Authors must fully cite all AI-created content (text, images, data) used in the manuscript. Policy page: PMLA manuscript submission guidelines (AI content citation/disclosure).
  • Medicine / Health – JAMA Network Journals: AI cannot be listed as an author. Authors must report AI use in manuscript preparation (tool, version, dates, description) in the Acknowledgment section. AI used in research methods must be detailed in the Methods section for reproducibility. Policy page: JAMA guidance on generative AI use (reporting and disclosure).
  • Chemistry – ACS Publications Journals: AI tools cannot be authors. All use must be disclosed in the Acknowledgments/Methods. AI-generated table-of-contents graphics are prohibited. Reviewers must not upload manuscripts to generative AI tools (breach of confidentiality). Policy page: ACS Publications AI policy (authors and reviewers).
  • Technology / Engineering – IEEE Publications: Disclosure required in the acknowledgments for AI-generated content (text, figures, images, code), identifying the system and extent of usage. Disclosure is recommended even for AI editing/grammar enhancement. Policy page: IEEE author guidelines for AI-generated text (disclosure requirements).
  • Multi-Disciplinary (Publisher) – Elsevier Journals: AI tools cannot be authors. Disclosure is required for writing/preparation use. Generative AI cannot create or alter images (unless AI itself is the subject of the research). Reviewers must not upload manuscripts to generative AI tools. Policy page: Elsevier generative AI policies for journals (author/reviewer guidance).
  • Multi-Disciplinary (Publisher) – Nature Portfolio Journals (Springer Nature): LLMs do not satisfy authorship criteria. Generative AI images are generally not permitted for publication. Reviewers are asked not to upload manuscripts into generative AI tools. Policy page: Nature Portfolio editorial policy on AI (authorship and images).
  • Multi-Disciplinary (Open Access) – PLOS Journals: AI tools cannot serve as authors or reviewers. AI use must be disclosed in the Methods or Acknowledgements (including tool name and how outputs were evaluated). Editors and reviewers should not upload submissions to generative AI tools due to confidentiality. Policy page: PLOS ethical publishing practices (AI disclosure and confidentiality).
  • Research Funding (Review) – National Institutes of Health (NIH): Reviewers are prohibited from using generative AI to analyze or formulate critiques for grant applications due to confidentiality concerns. Policy page: NIH notice NOT-OD-23-149 (prohibited generative AI use in peer review).
  • Research Funding (Review) – National Science Foundation (NSF): Reviewers are prohibited from uploading content from proposals or review records to non-approved generative AI tools (violates confidentiality). Proposers are encouraged to disclose AI use in proposal development. Policy page: NSF notice to the research community on AI (proposal and review guidance).
II. Step 1–3: Discovery & Literature Review Tools

 

Decision focus: What information is worth pursuing, and where does it come from?

Use these tools in sequence to move from broad discovery to verified, citable scholarship. AI can reveal patterns and connections quickly; library databases provide authoritative full text and support reproducibility.

Two common ways to begin discovery:

  • Article- and citation-based discovery: Explore existing research using tools like Semantic Scholar or Connected Papers
  • AI-mediated topic/question exploration: Start with a research question and use AI tools (e.g., Consensus) to generate and refine researchable questions

Both approaches support early-stage discovery and should lead to database searching for comprehensive and reproducible results.

Step 1: Surveying the Site (Discovery & Orientation)

This stage focuses on exploring the research landscape and identifying key sources.

📚 A. Semantic Scholar & Connected Papers – “Trustworthy Topic Overviews”

Semantic Scholar

  • Identify influential and highly cited articles
  • Browse concise summaries and key terms
  • Locate foundational or “must-read” papers

Connected Papers

  • Visualize relationships between papers using citation networks
  • Trace a topic’s intellectual lineage (prior and derivative works)
  • Identify emerging or understudied areas

 

⚠️ Keep in mind: These tools reflect what is well-indexed and highly cited. They may underrepresent newer, niche, interdisciplinary, or book-based scholarship. Confirm gaps using disciplinary databases and the library catalog.

📘 B. Consensus – “Topic/Question-to-Research Bridge”

Best for: Early discovery when you want to move from a broad topic, area of interest, rough prompt, or initial question toward focused, researchable questions grounded in existing literature. Especially useful for upper-level research classes exploring a variety of paper topics.

Helps you:

  • Refine broad topics or initial questions into specific research questions
  • Identify key terms and concepts
  • Preview how research addresses a topic or question

⚠️ Best used for orientation and question formation. Follow with database searching to ensure completeness and depth.


Step 2: Excavating & Sorting (Extraction)

Use Elicit to pull out study-level fragments quickly (methods, measures, samples, outcomes) and organize them for comparison.

🔎 A. Elicit – “The Methods & Patterns Tool” ⚠️ Public Generative AI Tool

Best for: Rapidly orienting to a topic and extracting study details.

  • Surface relevant studies from a research question.
  • Extract methods, outcomes, measures, and sample sizes.
  • Build quick tables of study characteristics.
  • Spot gaps, contradictions, and methodological patterns.
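If you record study details from Elicit (or by hand), a few lines of Python can turn them into a reusable comparison table. This is a minimal sketch only: the study entries and field names below are invented placeholders, not real citations and not Elicit's actual export format.

```python
# Sketch: organize extracted study-level details into a quick CSV comparison
# table. The two studies below are hypothetical placeholders for illustration.
import csv
import io

studies = [
    {"study": "Study A", "n": 120, "design": "RCT", "outcome": "belonging"},
    {"study": "Study B", "n": 45, "design": "qualitative", "outcome": "retention"},
]

def comparison_table(rows):
    """Render study characteristics as CSV text for side-by-side comparison."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["study", "n", "design", "outcome"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(comparison_table(studies))
```

The resulting CSV can be opened in Excel or pasted into a literature-review matrix.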

 

⚠️ Keep in mind: Elicit is great at finding relevant materials fast (strong search precision), but some recent studies indicate it can miss many relevant items on a given search (weaker search sensitivity). For comprehensive searches, use disciplinary databases (e.g., PsycINFO, PubMed, ERIC, Web of Science) and treat Elicit as a discovery and extraction accelerator, not a replacement for indexed search. Reach out to Polk Library’s Faculty Services Librarian, Erin McArthur, if you would like assistance performing a comprehensive literature review.


Step 3: Mapping the Landscape (Citation Networks)

Use mapping tools to see clusters, influential hubs, and how ideas connect across time.

πŸ—ΊοΈ B. ResearchRabbit & LitMaps – “The Citation Network Explorers” πŸ“š Open Research Source (Public Data)

Best for: Seeing the landscape and evolution of a field.

  • Visualize clusters of related research around a topic. (both)
  • Identify foundational “hub” articles and leading authors. (both)
  • Trace citation trails forward and backward in time. (both)
  • Surface topically similar papers that do not cite each other (ResearchRabbit)
  • Build a reproducible, citation-only expansion from a seed paper (LitMaps)
  • Maintain a living map with automated alerts for new citing work (LitMaps)

 

⚠️ Keep in mind: These tools rely heavily on citation networks, which can amplify mainstream or highly cited clusters and miss new, niche, interdisciplinary, or book-based scholarship. Use them to expand discovery, but confirm completeness with disciplinary databases and the library catalog.

III. Step 4–5: Authentication and Conservation Authority

Decision focus: Is this information accurate, credible, and appropriate for use in this context?

Step 4: Authentication & Context (Verification)

Use a verification helper to confirm sources exist, assess credibility, and understand how findings are received in later scholarship.

✔ D. OpenAlex – “Verification Helper” 📚 Open Research Source (Public Data)

After AI or mapping tools suggest possible articles, use OpenAlex to confirm that sources are real, relevant, and credible.

OpenAlex

  • Verify that a citation actually exists.
  • Check concepts, fields, and publication venues.
  • Locate related works for deeper searching.
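Citation checks against OpenAlex can also be scripted. The sketch below queries OpenAlex's public REST works endpoint by DOI; the helper names are ours, and the summarized fields (`title`, `publication_year`, `cited_by_count`) follow the public API's documented response format. Treat this as an illustrative starting point, not a complete verification workflow.

```python
# Sketch: confirm an AI-suggested citation resolves to a real OpenAlex record.
import json
import urllib.error
import urllib.request

def openalex_url(doi):
    """Build the OpenAlex works endpoint URL for a DOI."""
    return f"https://api.openalex.org/works/doi:{doi}"

def summarize(record):
    """Pull the fields most useful for quick verification from a works record."""
    return {
        "title": record.get("title"),
        "year": record.get("publication_year"),
        "cited_by": record.get("cited_by_count"),
    }

def verify_doi(doi):
    """Return a summary dict if the DOI exists in OpenAlex, else None (404)."""
    try:
        with urllib.request.urlopen(openalex_url(doi)) as resp:
            return summarize(json.load(resp))
    except urllib.error.HTTPError:
        return None

# Usage (hypothetical DOI): verify_doi("10.1234/example") -> dict or None
```

A `None` result means the DOI did not resolve, which is a strong signal the citation may be fabricated or mistyped.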

 

⚠️ Keep in mind: OpenAlex coverage varies by discipline because fields publish in different formats and use identifiers differently (e.g., DOIs/ORCIDs are more common in some areas than others). Records may be uneven when upstream metadata is incomplete or when author/institution matching is ambiguous.

⚠️ Predatory journals: While there is no single guaranteed list of predatory journals, you can evaluate signals in a journal’s OpenAlex profile. Absence from DOAJ (the Directory of Open Access Journals), a very low citation impact score, and missing publisher information are all red flags. If you are suspicious, review the editorial board and check additional indexes (Scopus, Web of Science, PubMed, where field-appropriate).


Step 5: Library as Conservation Authority (Full Text & Citable Record)

Use library databases to retrieve vetted full text, apply precise filters, and document a reproducible search.

🔗 E. AI (e.g., Microsoft Copilot ✅ Campus Approved) + Library Databases – “Your High-Precision Search Team”

AI can help you shape and refine search strategies. Library databases deliver the vetted, peer-reviewed, citable scholarship.

 

AI can help you:

  • Brainstorm keywords and related concepts.
  • Translate a vague idea into a structured question.
  • Draft Boolean search strings.
  • Identify alternate populations, outcomes, or contexts.

Library databases help you:

  • Retrieve authoritative, peer-reviewed articles.
  • Apply precise filters (method, date, population, study type).
  • Access full text and citation tools.
  • Ensure accuracy, comprehensiveness, and reproducibility.
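This division of labor can be made concrete: let AI (or your own notes) propose synonym groups for each concept, then assemble the Boolean string mechanically before pasting it into a database. The sketch below is illustrative only; the concept groups are hypothetical, and any real strategy should be validated against the database's thesaurus (e.g., PsycINFO subject headings).

```python
# Sketch: OR synonyms within each concept, AND the concepts together.
# The concept groups below are illustrative, not a validated search strategy.
concepts = {
    "first-generation": ['"first-generation"', '"first in family"'],
    "belonging": ["belonging", '"sense of belonging"', "inclusion"],
    "STEM": ["STEM", "science", "engineering"],
}

def boolean_string(groups):
    """Combine synonym groups into a single database-ready Boolean string."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in groups.values()]
    return " AND ".join(blocks)

print(boolean_string(concepts))
# -> ("first-generation" OR "first in family") AND (belonging OR "sense of belonging" OR inclusion) AND (STEM OR science OR engineering)
```

Keeping the groups in a small script (or spreadsheet) also documents your strategy, which supports reproducible searching.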
Try this prompt:

Turn this research question into a precise Boolean search strategy suitable for PsycINFO:
“How do first-generation college students experience belonging in STEM disciplines?”

Suggested discovery and authentication workflow:

1️⃣ Semantic Scholar / Connected Papers → orient to key papers & themes
2️⃣ Elicit → extract methods and study details
3️⃣ ResearchRabbit / LitMaps → map the citation landscape
4️⃣ OpenAlex & Scite → verify and contextualize
5️⃣ Library databases → retrieve and evaluate full text
6️⃣ Zotero → organize, annotate, and cite

IV. Step 6–7: Synthesis, Interpretation & Scholarly Authorship

Decision focus: How can I use AI to support synthesis while maintaining interpretation, authorship, and responsibility?

Step 6: Synthesis & Interpretation (Evidence Summaries & Writing)

Use these tools after you have identified and verified your sources. They help you compare studies, summarize evidence, structure an argument, and revise writing—while you remain responsible for accuracy and interpretation.

📘 A. Consensus – “The Evidence Synthesizer” ⚠️ Public Generative AI Tool

Best for: Understanding what the research collectively says on a question. Use it as a comparison and triangulation layer: set its summaries alongside the literature you have already acquired to see how your findings align with the broader evidence base.

  • Summarizes findings across multiple studies on a specific question.
  • Shows areas of agreement, disagreement, or uncertainty in the literature.
  • Helps you distinguish well-supported claims from emerging or contested ones.
  • Provides citations you can export into Zotero or other reference managers.

 

⚠️ Keep in mind: Consensus can feel definitive, but coverage is not comprehensive and interpretations can oversimplify. Verify claims by reading the cited studies (methods + results).

📂 B. NotebookLM ✅ Campus Approved – “The Corpus-Based Synthesizer”

Best for: Deep synthesis within a curated set of documents you choose.

  • Upload PDFs, research notes, transcripts, or article collections.
  • Ask questions that stay strictly within your selected corpus.
  • Generate tables of methods, samples, measures, or findings.
  • Compare themes, constructs, or theoretical frameworks across studies.

πŸ“ C. Microsoft Copilot βœ… Campus Approved – “The Drafting & Revision Partner”

Best for: Structuring, revising, and adapting writing directly within Word, PowerPoint, Excel, and PDFs using the institution-supported subscription.

  • Summarizes long documents (articles, literature review drafts, grant sections).
  • Drafts and revises outlines, section headings, and argument structure.
  • Rewrites paragraphs for clarity, concision, or different audiences.
  • Translates jargon-heavy passages for reviewers, students, or non-specialists.
  • Explains complex content in plainer language while preserving technical accuracy.

⚠ D. Caution with Drafting – “You Stay the Author”

AI can help you draft, revise, and synthesize—but it cannot take responsibility for your scholarship. As you use any of these tools:

  • Always cross-check AI-generated claims against the original sources.
  • Verify that citations exist and are appropriate for your topic.
  • Confirm interpretations through your own reading and disciplinary judgment.
  • Remember that you are responsible for the arguments, framing, and conclusions.

Use AI to accelerate the mechanics of synthesis and writing, while preserving your role as the scholar, interpreter, and author.

Suggested Step 6 workflow:

1️⃣ Library databases → retrieve and read full text
2️⃣ Zotero → annotate, tag, and organize what you will actually use
3️⃣ Consensus → scan what the broader literature tends to conclude (where appropriate)
4️⃣ NotebookLM → interrogate your curated PDF set for patterns, comparisons, and tables
5️⃣ Copilot → outline, draft, and revise with your verified sources open


Step 7: The Scholar as Final Authority

AI can assist with drafting and synthesis, but you remain responsible for interpretation, evidence selection, framing, and final claims. When AI contributes meaningfully, disclose use and verify all citations against the original sources.

V. Prompt Library for Faculty Research

Workflow position: This section primarily supports Step 2 (Excavating & Sorting), Step 5 (Library as Conservation Authority), and Step 6 (Synthesis & Interpretation).

These prompts are starting points designed to accelerate your research, writing, and evaluation workflows. Customize them with your discipline, methods, datasets, and preferred tools. Always verify AI outputs and integrate them with your own scholarly expertise and database searching.

Tool note: These prompts can be used with the AI tools listed on this page (e.g., ChatGPT, Microsoft Copilot, Elicit, Consensus, NotebookLM). Choose the tool based on your task and whether you can provide or limit the source material.

🔎 A. Literature Review Prompts (e.g., Elicit, Consensus, NotebookLM, ChatGPT, Copilot)

Evidence Mapping Prompt

“You are an expert research assistant in [discipline]. Using ONLY the articles I provide, create a concise synthesis of the main findings on [topic]. Highlight areas of agreement and disagreement among studies, and identify any gaps or unresolved questions noted by the authors.”

Search Term Expansion Prompt

“Based on this research question—[paste question]—generate synonyms, related terms, and possible subject headings for database searching in [discipline]. Return results in a table with the columns: concept | synonyms | possible subject terms.”

💰 B. Grant & Funding Prompts (e.g., ChatGPT, Copilot)

NIH-Style Specific Aims Starter

“Using NIH ‘Specific Aims’ conventions, help me draft a 1-page aims outline for a project on [topic] in [discipline]. Emphasize significance, innovation, and a clear 2–3 aim structure. Ask me for missing details before drafting.”

Reviewer-Friendly Summary

“Rewrite this technical abstract for a multidisciplinary review panel. Maintain accuracy but emphasize significance, clarity, and broader impacts: [paste abstract].”

🧪 C. Methodology & Research Design Prompts (e.g., ChatGPT, Copilot, Consensus)

Design Comparison Prompt

“Compare at least three research designs suitable for studying [topic] in [discipline]. For each design, list advantages, limitations, typical sample sizes, and threats to validity. Include citations to standard methods texts where possible.”

Instrument / Measure Scan Prompt

“List commonly used instruments or measures for studying [construct] in [population]. For each, describe what it measures, typical reliability/validity evidence, and relevant citations.”

📊 D. Data Interpretation & Communication Prompts (e.g., ChatGPT, Copilot, NotebookLM)

Plain-Language Results Explanation

“Explain the following results in plain, accessible language suitable for a grant reviewer or educated public, focusing on meaning and implications rather than technical detail: [paste results].”

Limitations and Next Steps Prompt

“Given these findings—[brief summary]—identify 3–5 realistic limitations and 3–5 logical directions for future research, aligned with norms in [discipline].”

VI. Integrating AI with Zotero and Library Services

Workflow position: Zotero and library services support every step of the workflow, with particular importance for Step 5 (Library as Conservation Authority) and Step 6 (Synthesis & Writing).

 

AI tools become significantly more powerful when paired with the licensed databases, research infrastructure, and expert guidance available through Polk Library. This section outlines practical ways to integrate AI-assisted discovery and synthesis with Zotero, library databases, and librarian support.

📚 A. Using Zotero to Manage AI-Discovered Literature

Zotero helps transform AI-suggested articles into a verified, organized, and citable research library.

 

  • Import AI-suggested citations using DOI, PMID, or the Zotero Connector.
  • Create collections such as "To Verify," "Include," "Exclude," and "Maybe."
  • Attach PDFs from library databases.
  • Use notes and tags to document screening and relevance decisions.
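The screening workflow above can also be scripted. The sketch below, a minimal illustration rather than an official Zotero integration, builds a Zotero-style journalArticle record from an AI-suggested citation and tags it for screening; the sample title and DOI are hypothetical placeholders. Actual upload would go through the Zotero Connector or the Zotero API.

```python
# Sketch: turn an AI-suggested citation into a Zotero-style item record,
# tagged for screening. Field names follow Zotero's journalArticle schema;
# the sample citation below is hypothetical.

def to_zotero_item(citation, screening_tag="to verify"):
    """Build a Zotero journalArticle item dict from a minimal citation dict."""
    return {
        "itemType": "journalArticle",
        "title": citation.get("title", ""),
        "creators": [
            {"creatorType": "author", "lastName": last, "firstName": first}
            for last, first in citation.get("authors", [])
        ],
        "DOI": citation.get("doi", ""),
        "date": str(citation.get("year", "")),
        # Tags document screening and relevance decisions, per the workflow above.
        "tags": [{"tag": screening_tag}],
    }

suggested = {
    "title": "Example AI-suggested article",   # hypothetical
    "authors": [("Doe", "Jane")],
    "doi": "10.1000/example",                  # hypothetical DOI
    "year": 2024,
}
item = to_zotero_item(suggested)
```

Keeping the screening tag in the record itself means the "to verify / include / exclude" decision travels with the citation into Zotero.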

🔗 B. Using AI to Strengthen Database Searching

AI can help refine search logic, but library databases ensure rigor and reproducibility.

  • Generate keyword lists, synonyms, and Boolean strings with AI.
  • Translate research questions into database-ready queries.
  • Apply controlled vocabularies and filters in PsycINFO, CINAHL, Web of Science, etc.
  • Verify journals, populations, and methods against disciplinary norms.
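The keyword-to-Boolean step can be made reproducible with a few lines of code. This is a minimal sketch, not a feature of any particular database: it ORs synonyms within each concept and ANDs the concepts together, using illustrative placeholder terms.

```python
# Sketch: combine AI-generated synonym lists into a database-ready
# Boolean string. Concept terms here are illustrative placeholders.

def boolean_query(*concept_groups):
    """OR synonyms within a concept, AND the concepts together."""
    clauses = []
    for terms in concept_groups:
        # Quote multi-word phrases so databases treat them as units.
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

query = boolean_query(
    ["adolescent*", "teen*"],                      # population
    ["social media", "online social network*"],    # exposure
    ["anxiety", "depressi*"],                      # outcome
)
# → (adolescent* OR teen*) AND ("social media" OR "online social network*")
#   AND (anxiety OR depressi*)
```

Saving the generated string alongside the database, date, and filters used makes the search auditable and easy to rerun.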

🤝 C. Working with Librarians for AI-Informed Research Support

Polk Library supports ethical, effective AI use across the research lifecycle. For guidance, contact Joe Pirillo, Polk Library's AI lead. Librarians can:

 

  • Co-develop AI-informed search strategies.
  • Evaluate AI-generated citations and claims.
  • Support Zotero workflows and citation management.
  • Advise on AI disclosure and transparency statements.
  • Provide workshops for departments or research teams.

πŸ“ D. Putting It All Together

Used together, AI tools, library databases, Zotero, and librarian expertise form a complete scholarly research ecosystem:

  • AI identifies candidate articles →
  • Library databases verify and retrieve full text →
  • Zotero organizes and documents decisions →
  • AI assists synthesis and drafting →
  • You provide interpretation, judgment, and authorship.

Research Foundations: Decision Points in AI-Supported Writing

AI-supported writing involves a series of decisions—when to use AI, how much to rely on it, and how critically to evaluate its output. Research shows AI can shape drafting, revision, reasoning, and student voice, making it essential to decide whether to accept, revise, or reject outputs. The student remains the final authority—responsible for judgment, verification, and authorship.

Key takeaway: Use AI to support your thinking—not replace it.

Jacob et al. (2024); Pryma et al. (2025); Shen & Chen (2025); Tekir (2026); Thandla et al. (2024); Urban et al. (2024).

Full APA references:
Jacob, S. R., Tate, T., & Warschauer, M. (2024). Emergent AI-assisted discourse: A case study of a second language writer authoring with ChatGPT. Journal of China Computer-Assisted Language Learning, 4(1), 1–25. https://doi.org/10.1515/jccall-2024-0011
Pryma, V., Pelivan, O., Teletska, T., Tsobenko, O., & Zagrebelna, N. (2025). AI writing assistants and student competence: A linguistic aspect. Arab World English Journal, Special Issue on Artificial Intelligence, 319–329. https://doi.org/10.24093/awej/ai.18
Shen, Y., & Chen, L. (2025). ‘Critical chatting’ or ‘casual cheating’: How graduate EFL students utilize ChatGPT for academic writing. Computer Assisted Language Learning. Advance online publication.
Tekir, S. (2026). Generative AI use in EFL writing: Associations with originality, critical reasoning, and metacognitive engagement in a Turkish higher education context. Computer Assisted Language Learning. Advance online publication. https://doi.org/10.1080/09588221.2026.2617399
Thandla, S. R., Armstrong, G. Q., Menon, A., Shah, A., Gueye, D. L., Harb, C., Hernandez, E., Iyer, Y., Hotchner, A. R., Modi, R., Mudigonda, A., Prokos, M. A., Rao, T. M., Thomas, O. R., Beltran, C. A., Guerrieri, T., LeBlanc, S., Moorthy, S., Yacoub, S. G., ... Zimmerman, P. A. (2024). Comparing new tools of artificial intelligence to the authentic intelligence of our global health students. BioData Mining, 17(1), Article 8. https://doi.org/10.1186/s13040-024-00408-7
Urban, M., Brom, C., Lukavský, J., Děchtěrenko, F., Hein, V., Svacha, F., Kmoníčková, P., Urban, K., & Hei, V. (2024). “ChatGPT can make mistakes. Check important info.” Epistemic beliefs and metacognitive accuracy in students’ integration of ChatGPT content into academic writing. Computers & Education: Artificial Intelligence, 6, Article 100197. https://doi.org/10.1016/j.caeai.2024.100197
  • Are these tools freely accessible?
  • Where does the information in these AI tools come from?
  • I have many articles from which I would like to extract and compare data; what can I use?
  • Do I need paid subscriptions to use this workflow?
  • Which tools are campus-provided?
  • What should I avoid entering into generative AI tools?
  • How do I keep AI-assisted work rigorous and reproducible?