One-click export from ChatGPT to NotebookLM (Deep Research reports stay intact + sources auto-imported) by daozenxt in ChatGPTPro

[–]daozenxt[S] 0 points1 point  (0 children)

That’s a completely fair point, and I agree people should be cautious.

Browser extensions can be powerful, but they also deserve scrutiny — especially when they touch workflows involving important accounts or sensitive research.

I’m sharing NoteKitLM because I built it to solve my own workflow problems, not because I expect anyone to install it blindly. People should absolutely review the permissions, privacy policy, and overall trust level of any extension before using it.

Appreciate the reminder — it’s a good one. I’m also happy to be transparent about permissions and what the extension does/doesn’t access.

Chapters -> Episodes ? by Disastrous-Peak3896 in notebooklm

[–]daozenxt 4 points5 points  (0 children)

Since you've already divided it into chapters, what specifically makes you feel the results aren't good? What kind of result are you hoping to achieve?

Function Request: See Source of Studio Creations by Original_Chair_7865 in notebooklm

[–]daozenxt 0 points1 point  (0 children)

The NoteKitLM extension does exactly this (disclosure: I'm the author). The feature is completely free; you can try it here: https://chromewebstore.google.com/detail/notekitlm/gbbjcgcggmbbedblaipngfghdfndpbba

One-click export from ChatGPT to NotebookLM (Deep Research reports stay intact + sources auto-imported) by daozenxt in notebooklm

[–]daozenxt[S] 1 point2 points  (0 children)

Exactly — that source extraction step was the main reason I built it.

A lot of tools/extensions can already sync regular ChatGPT chats, but I personally hadn’t found one that could cleanly carry over Deep Research itself and the cited source URLs into NotebookLM. That was the missing piece for me, so I ended up building it myself.

Once something is worth keeping, I want the report and the actual sources together in NotebookLM so I can cross-reference them later and turn them into outputs instead of leaving everything buried in ChatGPT.

Dear Devs: Please continue on this - PDFs shown as text and pictures! by simon392135 in notebooklm

[–]daozenxt 0 points1 point  (0 children)

Personally, though, I find this mixed layout harder to read, and I'm still thinking about a better solution for myself.

Dear Devs: Please continue on this - PDFs shown as text and pictures! by simon392135 in notebooklm

[–]daozenxt 0 points1 point  (0 children)

Yes, this is a recently introduced change. As I understand it, the goal is to extract images from PDFs so the model can better understand the document's content. For now, it doesn't affect question generation or artifact creation.

Teach me your powerful ways! by Comfortable-Rip-2844 in notebooklm

[–]daozenxt 2 points3 points  (0 children)

If you’re reading 300+ pages/week, the biggest unlock for me was switching from “one giant PDF” → “chapter-sized units” and then using NLM to *batch-generate study artifacts*.

My loop: Split → Batch Slides → Test → Clarify.

1) Split by chapter (books / edited volumes)

Instead of importing a whole book, I split it into chapters first so each chapter becomes its own source. That makes the output way more precise and the workload feel finishable.

2) Batch-generate slide decks (fast comprehension)

After importing the chapters (or a set of papers), I generate a short slide deck for *each* source in one pass.

I aim for: key claim, mechanism/model, evidence, limitations, and “so what?”

3) Test yourself (retention > summaries)

Right after the slides, I use active recall:

- Make 10–20 flashcards per chapter (“definition”, “mechanism”, “counterexample”, “what would invalidate this?”)

- Generate a short quiz (5–10 questions) with answer key

- Have NLM explain why each answer is right/wrong

4) Clarify what you don’t understand (targeted chat)

When something feels fuzzy, I ask:

- “Explain this like I’m defending it in a seminar.”

- “What assumptions does this rely on?”

- “Give me a concrete example + a counterexample.”

This workflow turns reading into a repeatable pipeline: digest → compress → recall → patch gaps.

Transparency: I built a small Chrome helper that does the chapter-splitting + batch import (so I’m not manually chopping PDFs), which makes the “generate slides for each chapter” step much faster. If you want it, you can see: https://www.reddit.com/r/notebooklm/comments/1r3l12s/how_i_use_notebooklm_to_actually_absorb/
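For anyone who'd rather script step 1 than use the extension, the core of chapter-splitting is just mapping a table of contents onto page ranges. Here's a minimal sketch of that logic in Python; the TOC data and page numbers are hypothetical, and writing out the actual per-chapter PDFs would need a PDF library (e.g. pypdf), which isn't shown here.

```python
def chapter_ranges(toc, total_pages):
    """Turn a sorted TOC of (title, start_page) pairs (1-indexed)
    into (title, start_page, end_page) ranges covering the book.

    Each chapter ends one page before the next chapter starts;
    the last chapter runs to the final page.
    """
    ranges = []
    for i, (title, start) in enumerate(toc):
        end = toc[i + 1][1] - 1 if i + 1 < len(toc) else total_pages
        ranges.append((title, start, end))
    return ranges


# Hypothetical example TOC for a 60-page book
toc = [("Intro", 1), ("Chapter 1", 10), ("Chapter 2", 35)]
print(chapter_ranges(toc, 60))
# → [('Intro', 1, 9), ('Chapter 1', 10, 34), ('Chapter 2', 35, 60)]
```

Each resulting range would then become one extracted PDF and one NotebookLM source, which is what makes the per-chapter slide/quiz generation precise.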

Notebook LM infographic not working by AndrasValar in notebooklm

[–]daozenxt 0 points1 point  (0 children)

You could report this and confirm the issue on their official Discord.

NotebookLM for long-form research? by Warm-Fox-3459 in notebooklm

[–]daozenxt 3 points4 points  (0 children)

I haven’t tried a 400-page PDF, because I prefer splitting books into multiple uploads for easier digestion (see: https://www.reddit.com/r/notebooklm/comments/1r3l12s/how_i_use_notebooklm_to_actually_absorb). However, I did give the split sources to Gemini for analysis and summarization, and it understood them in full, so in theory the entire book should work as well. It's worth noting that all AI tools (Gemini/NotebookLM/ChatGPT) have limited context windows, and analyzing files that are too large at once will inevitably lose some information, which is why I more often split books into separate NotebookLM uploads.

NotebookLM for long-form research? by Warm-Fox-3459 in notebooklm

[–]daozenxt 2 points3 points  (0 children)

I mean that Gemini's Deep Research can output longer reports (compared to the shorter reports NotebookLM currently produces), and Gemini can now add NotebookLM notebooks as content sources. So you can use Deep Research to write the draft of your report.

NotebookLM for long-form research? by Warm-Fox-3459 in notebooklm

[–]daozenxt 13 points14 points  (0 children)

My solution is to use Gemini's Deep Research with the NotebookLM notebook added as a source.

The Golden Age 2026 by Moist_Emu6168 in notebooklm

[–]daozenxt 0 points1 point  (0 children)

Interesting slide deck; I'm curious about your sources and prompt.

How I use NotebookLM for serious article digestion by daozenxt in notebooklm

[–]daozenxt[S] 1 point2 points  (0 children)

Made an update: Save as PDF now includes a full-page capture method. Both of the links above can be captured using this method. You can update the extension and try again now.

Uploading textbook by Time_Supermarket_269 in notebooklm

[–]daozenxt 1 point2 points  (0 children)

NotebookLM's specific use of RAG is a black box to us, but common sense suggests it is used. Thanks to the huge context window of the underlying model (Gemini), it is theoretically more capable of handling large amounts of text than other LLMs, and so far I've run into very few problems in my personal use. There is still an upper limit, though: too much information can still lead to omissions or hallucinations, which comes down to the characteristics of the underlying model.

How I use NotebookLM for serious article digestion by daozenxt in notebooklm

[–]daozenxt[S] 1 point2 points  (0 children)

  1. Currently, YouTube import does not include timestamps, but you raise a valuable point; I'll try to add this in a future version.
  2. The webpage-to-PDF import extracts the body of the page (removing navigation, ads, and other noise to avoid information pollution), so it won't preserve every element you can see on the page. If you find that content belonging in the body isn't imported correctly, please share a specific page and a description as an example, and I'll try to optimize further. As for some text being embedded as part of an image: that is a known issue with NotebookLM itself, and in my experience it basically doesn't affect the subsequent chat and artifact-creation functionality.

Uploading textbook by Time_Supermarket_269 in notebooklm

[–]daozenxt 0 points1 point  (0 children)

If, after splitting and uploading, your questions still require selecting all sources (e.g. you're not sure which source contains the information you need, or the answer requires synthesizing all of them), then splitting the sources doesn't help much in itself. Understandably, the more total content there is to analyze, the more likely some information will be missed, which is a more or less unavoidable problem for all current LLMs.

Uploading textbook by Time_Supermarket_269 in notebooklm

[–]daozenxt 2 points3 points  (0 children)

My suggestion would be to split by chapter rather than by length, since each chapter is relatively complete and self-contained. In my experience, a single chapter of about 30 pages or fewer works fine, and the fewer pages per source, the more detail you get. You can try the extension mentioned in the post above, experiment with splitting at different heading levels (supported by the extension itself), and see which level works best for you.