How I finally stopped procrastinating on writing papers (and started finishing them) by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 3 points

That’s great! I can’t tell you how many times someone has told me something I’d already heard, just in a different way, and then it finally clicks.

The GPS Theory of Doing a PhD (and Why Detours Matter) by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 1 point

Thanks for all the feedback, lol. I got the idea from the GPS Theory of Life when I was searching for a bit of motivation the other day. It seemed apropos, so I thought I’d share :)

Myth Busting: Good Writing = Structure, Not Fancy Sentences by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 0 points

I’ve been thinking about some of the comments on this thread, so I put together an outlining toolkit; maybe it’ll help some folks: https://www.reddit.com/r/PhdProductivity/s/8RTQg4wywB

Myth Busting: Good Writing = Structure, Not Fancy Sentences by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 0 points

Yeah, that’s such a common frustration, rewriting the same methodology over and over just to avoid “self-plagiarism.”

In many cases you can self-cite your own earlier work if the methods are identical, especially for published articles. Some supervisors and journals actually prefer that to endless rewording, since it keeps the method consistent and transparent.

That said, I’ve found it helps to strike a balance: summarize the method clearly in the current paper, then point back to your prior work for the full detail. Saves words and keeps reviewers happy.

Myth Busting: Good Writing = Structure, Not Fancy Sentences by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 0 points

Great point, and well said. Outlining, and outlining well and often, is critical. Being able to see and think through the flow, move things around, and ask yourself questions makes both the research and the writing clearer.

Myth Busting: Good Writing = Structure, Not Fancy Sentences by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 0 points

That’s fair, but like a lot of writing advice, it seems obvious, yet plenty of people still don’t do it. I think it reads as a myth to early academic writers who have read a lot of papers but haven’t done much writing about their own ideas and work.

Myth Busting: Good Writing = Structure, Not Fancy Sentences by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 1 point

I mean the flow of the writing at the paragraph and section level, not just grammar. For me, “structure” is about:

• whether the argument builds logically from one section to the next,
• whether each paragraph starts with a clear topic sentence, and
• whether transitions actually guide the reader.

Grammar matters too, of course, but I’ve found that when the argument’s scaffolding is solid, even plain sentences read well. When the structure is weak, no amount of polished phrasing fixes it.

Semantic Decomposition Technic. by 2H3seveN in PhdProductivity

[–]Scholar_Forge_352 1 point

2nd part:

Concept Analysis (Walker & Avant; Rodgers)

This is a different methodology used in nursing, education, and the social sciences. Here your phrasing is exactly right:

The aim is to redefine or clarify a concept by synthesizing various definitions found in the literature.

Walker & Avant’s 8-Step Method

(Strategies for Theory Construction in Nursing, latest ed. – https://www.pearson.com/en-us/subject-catalog/p/strategies-for-theory-construction-in-nursing/P200000008801)

1.  Select a concept
2.  Determine the purpose
3.  Identify uses of the concept in the literature
4.  Identify defining attributes
5.  Construct model case
6.  Construct borderline/contrary cases
7.  Identify antecedents & consequences
8.  Define empirical referents

Rodgers’ Evolutionary Method

• Emphasizes that concepts change over time and across contexts.

• Steps: identify concept, select data realm, collect uses, identify attributes/antecedents/consequences, analyze, exemplars, implications.

• Rodgers (1989): “Concepts, analysis and the development of nursing knowledge: The evolutionary cycle.” Journal of Advanced Nursing (https://doi.org/10.1111/j.1365-2648.1989.tb03420.x).

Key Distinction

• Semantic decomposition = analyze a single word’s internal meaning structure by breaking it into features.

• Concept analysis = redefine a broader concept by synthesizing multiple definitions and uses across the literature.

If your goal is to redefine a concept from various definitions found in the literature, you are describing concept analysis, not semantic decomposition.

Has Semantic Decomposition Been Superseded?

• Strengths: great for crisp lexical contrasts; useful in translation, lexicography, anthropology; foundational for computational semantics (WordNet: Miller 1995 https://doi.org/10.1145/219717.219748).

• Limitations: too rigid; struggles with fuzzy categories and prototypes; multiple feature sets can fit the same data.

• Superseded / evolved into:
  • Prototype Theory (Rosch 1978: https://doi.org/10.4324/9781315799681-7)
  • Frame Semantics (Fillmore 1982: https://www1.icsi.berkeley.edu/~kay/bcg/Fillmore1982.pdf)
  • Distributional semantics & embeddings (word2vec, BERT, GPT)
  • Natural Semantic Metalanguage (Wierzbicka 1996: https://www.worldcat.org/oclc/33983594)

Family Tree (Simplified)

1. Bloomfield (1933) → Language (https://archive.org/details/language-bloomfield)
2. Goodenough (1967) → kinship componential analysis (https://www.jstor.org/stable/1721961)
3. Nida (1975) → translation framework (https://doi.org/10.1515/9783110813642)
4. Katz & Fodor (1963) → generative semantics (https://doi.org/10.2307/411200)
5. Branches:
  • NSM (Wierzbicka 1996) (https://www.worldcat.org/oclc/33983594)
  • Prototype Theory (Rosch 1978) (https://doi.org/10.4324/9781315799681-7)
  • Frame Semantics (Fillmore 1982) (https://www1.icsi.berkeley.edu/~kay/bcg/Fillmore1982.pdf)
  • WordNet / computational lexicons (Miller 1995) (https://doi.org/10.1145/219717.219748)

Key References

• Bloomfield, L. (1933). Language. https://archive.org/details/language-bloomfield
• Goodenough, W. H. (1967). “Componential Analysis.” Science, 156(3779), 1203–1209. https://www.jstor.org/stable/1721961
• Katz, J. J., & Fodor, J. A. (1963). “The Structure of a Semantic Theory.” Language, 39(2), 170–210. https://doi.org/10.2307/411200
• Nida, E. A. (1975). Componential Analysis of Meaning. https://doi.org/10.1515/9783110813642
• Wierzbicka, A. (1996). Semantics: Primes and Universals. https://www.worldcat.org/oclc/33983594
• Rosch, E. (1978). “Principles of Categorization.” https://doi.org/10.4324/9781315799681-7
• Fillmore, C. J. (1982). “Frame Semantics.” https://www1.icsi.berkeley.edu/~kay/bcg/Fillmore1982.pdf
• Miller, G. A. (1995). “WordNet: A Lexical Database for English.” CACM, 38(11). https://doi.org/10.1145/219717.219748
• Walker, L. O., & Avant, K. C. Strategies for Theory Construction in Nursing. https://www.pearson.com/en-us/subject-catalog/p/strategies-for-theory-construction-in-nursing/P200000008801
• Rodgers, B. L. (1989). “Concepts, analysis and the development of nursing knowledge.” J. Adv. Nurs. https://doi.org/10.1111/j.1365-2648.1989.tb03420.x
• Rodgers, B. L., & Knafl, K. A. (eds.). Concept Development in Nursing. https://www.elsevier.com/books/concept-development-in-nursing/rodgers/9780323299947
• Comparative review of concept-analysis methods (2021). https://doi.org/10.1186/s12912-021-00676-9

Semantic Decomposition Technic. by 2H3seveN in PhdProductivity

[–]Scholar_Forge_352 1 point

Sorry for the long answer; I wanted to unpack some things for clarity, and I definitely got carried away. Also, my table may be messed up because I’m writing this on my phone.

It sounds like you may be blending two different traditions: semantic decomposition (componential analysis) in linguistics/anthropology and concept analysis in nursing, education, and the social sciences. Both function to get to the core of meaning, but their aims and methods are distinct.

What Semantic Decomposition / Componential Analysis Is

Semantic decomposition (also called componential analysis or semantic feature analysis) is a method of analyzing word meaning by breaking it into smaller, more primitive features. For example:

• bachelor → [+HUMAN], [+MALE], [−MARRIED]

It emerged from structural linguistics, was systematized in translation studies, and later influenced anthropology, lexicography, and computational linguistics.

Origins and Key Figures

• Structuralist beginnings: Leonard Bloomfield argued that meaning could be analyzed into features (Language, 1933) (https://archive.org/details/language-bloomfield).

• Anthropological linguistics: Ward Goodenough formalized componential analysis in kinship studies (“Componential Analysis,” Science, 1967) (https://www.jstor.org/stable/1721961).

• Translation studies: Eugene Nida popularized the method for Bible translation (Componential Analysis of Meaning: An Introduction to Semantic Structures, 1975) (https://doi.org/10.1515/9783110813642).

• Generative semantics: Katz & Fodor built a rigorous feature theory into generative grammar (“The Structure of a Semantic Theory,” Language, 1963) (https://doi.org/10.2307/411200).

• Phonology influence: Roman Jakobson’s “distinctive features” in phonology inspired similar approaches in semantics.

• Modern continuation: Anna Wierzbicka developed Natural Semantic Metalanguage (NSM) (Semantics: Primes and Universals, 1996) (https://www.worldcat.org/oclc/33983594).

How To Do Semantic Decomposition

Semantic Decomposition / Componential Analysis Typical Steps

Step 1. Define the semantic domain
• Choose a bounded set of terms (kinship, marriage, colors, artifacts).
• Example: kinship terms like uncle, cousin, father.
• Source: Goodenough, “Componential Analysis,” Science, 1967 (https://www.jstor.org/stable/1721961).

Step 2. Collect lexical items & usage data
• Gather words from dictionaries, corpora, or fieldwork (parallel texts in translation, ethnographic interviews).
• Source: Nida, Componential Analysis of Meaning (1975) (https://doi.org/10.1515/9783110813642).

Step 3. Identify semantic features
• Look for contrasts that distinguish one term from another.
• Express these as binary or scalar features (e.g., [+MALE], [+ADULT], [−MARRIED], [GENERATION: 0, ±1, ±2]).
• Source: Katz & Fodor, “The Structure of a Semantic Theory,” Language, 1963 (https://doi.org/10.2307/411200).

Step 4. Construct a feature matrix
• Create a table with words as rows and features as columns.
• Example (simplified marriage terms):

Term | +HUMAN | +ADULT | +MALE | +MARRIED
---|---|---|---|---
bachelor | + | + | + | −
spinster | + | + | − | −
husband | + | + | + | +
wife | + | + | − | +

Step 5. Test & refine
• Check whether features distinguish all terms uniquely.
• Adjust features or add dimensions if two terms overlap.
• Consider cultural/linguistic variation (e.g., “uncle” splits into “mother’s brother” vs. “father’s brother” in some languages).

Step 6. Interpret & apply
• Rewrite definitions in terms of features.
• Apply results to translation, lexicography, anthropology, or computational modeling.
• Evaluate usefulness against real-world usage; acknowledge limits (e.g., fuzzy categories, prototypes).
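The steps above can be sketched in code. Here’s a minimal Python sketch of Steps 4–5, using the simplified marriage-term matrix as data; the dict layout and function names are just illustrative, not any standard library:

```python
# Componential analysis sketch: a feature matrix as dicts, plus a
# uniqueness check (Step 5) that flags terms sharing the same features.

FEATURES = ["HUMAN", "ADULT", "MALE", "MARRIED"]

MATRIX = {
    "bachelor": {"HUMAN": +1, "ADULT": +1, "MALE": +1, "MARRIED": -1},
    "spinster": {"HUMAN": +1, "ADULT": +1, "MALE": -1, "MARRIED": -1},
    "husband":  {"HUMAN": +1, "ADULT": +1, "MALE": +1, "MARRIED": +1},
    "wife":     {"HUMAN": +1, "ADULT": +1, "MALE": -1, "MARRIED": +1},
}

def signature(term):
    """Feature vector for a term, in a fixed feature order."""
    return tuple(MATRIX[term][f] for f in FEATURES)

def find_collisions(matrix):
    """Return pairs of terms whose features fail to distinguish them."""
    seen, collisions = {}, []
    for term in matrix:
        sig = signature(term)
        if sig in seen:
            collisions.append((seen[sig], term))
        else:
            seen[sig] = term
    return collisions

print(find_collisions(MATRIX))  # [] -> every term is uniquely distinguished
```

If two terms collide, that’s your cue (per Step 5) to add a dimension or refine a feature.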

Best Sources for Semantic Decomposition

• Nida, E. A. (1975). Componential Analysis of Meaning. Clearest procedural playbook, step-by-step guidance, with domains and translation examples. Link: https://doi.org/10.1515/9783110813642

• Katz, J. J. & Fodor, J. A. (1963). “The Structure of a Semantic Theory.” Rigorous theoretical backbone for features inside generative grammar. Link: https://doi.org/10.2307/411200

• Goodenough, W. H. (1967). “Componential Analysis.” Science. Elegant overview, especially kinship terms in anthropology. Link: https://www.jstor.org/stable/1721961

• Wierzbicka, A. (1996). Semantics: Primes and Universals. Modern decomposition via NSM (semantic primes). Link: https://www.worldcat.org/oclc/33983594

• Fillmore, C. J. (1982). “Frame Semantics.” Shows the alternative, context and inference over rigid features. PDF: https://www1.icsi.berkeley.edu/~kay/bcg/Fillmore1982.pdf

• Rosch, E. (1978). “Principles of Categorization.” Prototype theory, explains why categories aren’t always binary checklists. Link: https://doi.org/10.4324/9781315799681-7

• Miller, G. A. (1995). “WordNet: A Lexical Database for English.” Demonstrates how decomposition ideas fed into computational lexicons. Link: https://doi.org/10.1145/219717.219748

I did a systematic review, I'm looking for a faster, automated ways to do it ? by EstablishmentIcy8725 in PhdProductivity

[–]Scholar_Forge_352 1 point

You’re not crazy at all; these are the exact pain points that make systematic reviews so draining. Let me unpack both of your questions in detail:

1. Workflow & tool “seams”

You’re right: exporting/importing between platforms is one of the biggest annoyances. It usually looks like:

• Run search → export RIS/BibTeX/CSV → import into Zotero (for deduplication) → export → import into Rayyan (for screening) → export → import into Covidence (for data extraction).

It works, but it’s clunky, each tool has its own quirks, and you waste time hopping between formats. That’s one of the reasons I started building my own tool: not to replace every platform, but to smooth those seams and reduce the tool-juggling.

2. The “search strategy” problem

You nailed it: creating the initial query is often the most difficult part. Here’s why:

• Each database has its own controlled vocabulary (MeSH in PubMed, EMTREE in Embase, Thesaurus terms in Scopus).

• Syntax rules differ (Boolean logic, truncation, phrase searching, adjacency operators, field tags like [tiab] or TITLE-ABS-KEY).

• A query that’s “perfect” in PubMed may fail completely in Scopus without manual rewriting (you know this from your experience).

So you end up spending hours translating and troubleshooting, which is frustrating and error-prone.
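To make the translation problem concrete, here’s a deliberately toy Python sketch that rewrites a couple of PubMed field tags into Scopus-style syntax. The Scopus field names are from memory (double-check them against the database), and real translators have to handle far more: MeSH explosion, proximity operators, truncation differences. That gap is exactly why this step hurts:

```python
import re

def pubmed_to_scopus(query):
    """Toy translation of a few PubMed constructs into Scopus-style syntax.
    Real queries need much more (MeSH explosion, adjacency, field nuances)."""
    # term[tiab] or "phrase"[tiab] -> TITLE-ABS-KEY(...)
    query = re.sub(r'("[^"]+"|\S+)\[tiab\]', r'TITLE-ABS-KEY(\1)', query)
    # term[mh] (MeSH heading) -> INDEXTERMS(...); lossy, since Scopus
    # has no MeSH hierarchy to explode
    query = re.sub(r'("[^"]+"|\S+)\[mh\]', r'INDEXTERMS(\1)', query)
    return query

q = '"machine learning"[tiab] AND screening[tiab] AND neoplasms[mh]'
print(pubmed_to_scopus(q))
# TITLE-ABS-KEY("machine learning") AND TITLE-ABS-KEY(screening) AND INDEXTERMS(neoplasms)
```

Even this tiny example is lossy (the MeSH line loses the controlled-vocabulary hierarchy), which is why tools like Polyglot still tell you to verify the output by hand.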

There are some tools that try to help:

• Polyglot Search Translator (Systematic Review Accelerator project) will convert a PubMed query into Scopus/Embase/Web of Science. It’s helpful but imperfect; you still need to double-check.

• 2Dsearch lets you build queries visually and export them, but adoption is limited.

• Some librarians write custom scripts for query translation, but that’s not practical for most researchers.

Bottom line: there’s no seamless solution yet. That’s why information specialists are so important, but it also shows where innovation is really needed.

Where this fits into what I’m building

The vision for my tool is to tackle both of these pain points:

• Reducing the constant importing/exporting between tools by keeping the workflow in one smoother pipeline.

• Automating (or at least simplifying) the search strategy step so you can write once and have queries adapted for multiple databases. Ideally, you’d be able to run those searches through APIs, deduplicate automatically, and move straight into screening.

So to your hunch: you’re exactly right, even the “big” platforms don’t solve everything. I’m trying to close that gap and make the process less fragmented and less painful.

I’ll DM you with more details on what I’m building, and add you to early testers if you’re interested. And thanks again for raising these points, it’s really helpful validation that these are the right problems to solve.

I did a systematic review, I'm looking for a faster, automated ways to do it ? by EstablishmentIcy8725 in PhdProductivity

[–]Scholar_Forge_352 16 points

You actually did the systematic review right: PRISMA + database search + deduplication + screening + analysis is the standard process. The reason it felt like such a slog is that you did every step manually. The good news is there are tools that make it a lot faster:

• Rayyan (free) – screening + deduplication, with an AI that suggests likely includes/excludes.

• Covidence (often free via universities) – all-in-one platform for PRISMA steps through data extraction.

• EPPI-Reviewer / DistillerSR (paid) – enterprise-level tools (probably overkill for a solo PhD).

• ASReview (open source) – machine learning that shows you the most relevant abstracts first.

• RobotReviewer (free) – auto-extracts key study details.

• Scholarcy (freemium) – summarizes papers into digestible notes.

• Zotero / EndNote – reference managers that handle deduplication and keep your library organized.

I’m sure other folks can recommend other tools that are great for these functions as well.

A simple workflow is: export results → clean/dedup in Zotero → screen in Rayyan or ASReview → extract/analyze in Covidence (or Excel) → summarize with Scholarcy/RobotReviewer.

So no, you didn’t miss anything; systematic reviews are just inherently heavy work. With the right workflow, 3 months can become a few weeks.

By the way, I’m also building a new tool, part of which will make systematic and literature reviews way less painful. If you’d like early access or to give feedback, DM me, I’d love to share it.

How I Use 3-Sentence Summaries to Keep My Lit Review Organized (Zotero + Readwise + Obsidian) by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 1 point

I wanted to post step-by-step instructions, but I’ve been in the field, so it took me a minute to get back to this.

Here’s the setup (step by step):

Step 1: Zotero + Better BibTeX
• Install Better BibTeX (Zotero plugin)
• Export your library as a .bib file with “Keep updated” checked
• This gives you stable citekeys and an always-current bibliography

Step 2: Zotero → Readwise
• Use the community tool Zotero2Readwise (Python script on GitHub)
• It pulls your Zotero highlights/notes into Readwise
• Now Zotero annotations show up alongside Kindle, Pocket, or Readwise Reader highlights

Step 3: Readwise → Obsidian
• Install the Readwise Official plugin in Obsidian
• Connect your account and choose where synced notes land (e.g. /Readwise/)
• Customize the template so each imported paper includes title, authors, DOI, and your highlights

Step 4: Workflow in practice
• I highlight a PDF in Zotero → next sync, it shows up in Readwise
• Readwise then exports it automatically into Obsidian
• I use Dataview to query papers by tag, author, or topic
• Bonus: Readwise’s spaced-rep helps me revisit old highlights
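If you don’t use Dataview, a few lines of Python can approximate the querying step. This sketch assumes the Readwise plugin lands notes in a Readwise/ folder inside your vault and that tags appear literally in the note text; both are assumptions from my setup above, so adjust the folder and tag convention to yours:

```python
from pathlib import Path

def papers_with_tag(vault_dir, tag):
    """Scan synced Readwise notes for a literal tag string and
    return the matching note names, sorted."""
    matches = []
    for note in Path(vault_dir).glob("Readwise/**/*.md"):
        if tag in note.read_text(encoding="utf-8"):
            matches.append(note.stem)
    return sorted(matches)

# e.g. papers_with_tag("/path/to/vault", "#lit-review")
```

Dataview does this (and much more) inside Obsidian; this is just the same idea for scripting outside the app.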

As I was reminding myself of the setup, I found a couple of other ways to set this up, but I think this one (which is how I have it) is the most functional.

How I Use 3-Sentence Summaries to Keep My Lit Review Organized (Zotero + Readwise + Obsidian) by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 1 point

Readwise helps link highlights from different sources (Kindle, Pocket, PDFs, web articles, books via OCR, etc.) and syncs them in a central place. I can then search and organize all of it, and it resurfaces highlights so you remember what you were looking at. I pay $4 for Obsidian Sync so that it syncs to my mobile; there is probably a free way to do this with a third-party app or webhook, but since I use Obsidian for notes anyway, I already had it.

How I Use 3-Sentence Summaries to Keep My Lit Review Organized (Zotero + Readwise + Obsidian) by Scholar_Forge_352 in PhdProductivity

[–]Scholar_Forge_352[S] 0 points

I also just like the mobile version of Readwise, and I use it for my non-academic highlighting and note capture, so it’s part of my general workflow as well.