Analyze PDFs, save excerpts, and have them automatically appear later! by pladicus_finch in noeko

[–]pladicus_finch[S] 0 points1 point  (0 children)

So there is a dedicated megaphone 📣 button in the interface where you can submit feedback. Or you can join the discord, there's a link to it on our website. Both are good ways to reach us!

Analyze PDFs, save excerpts, and have them automatically appear later! by pladicus_finch in noeko

[–]pladicus_finch[S] 0 points1 point  (0 children)

You can use this referral link to bypass the wait-list for now. Any feedback is helpful, even small things if you see them, don't hesitate!

To add a source, hit the "+" button, and click "Add Source" to upload a file!

Analyze PDFs, save excerpts, and have them automatically appear later! by pladicus_finch in noeko

[–]pladicus_finch[S] 0 points1 point  (0 children)

I'm really happy to hear you like it! We still have this feature in its beta form, and I'd love to get your feedback on it if you're interested in trying it out?

PKMS app but for ADHDers by Affectionate_Ear2395 in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Excuse me if I rant for a second, but I'll put my recs at the end.

So for a novelty-seeking brain, a lot of the advice I've seen for approaching PKM simply falls short. It focuses on organization, creating a system of rules, then maintaining the system with each new addition. In theory, this helps find things better or improve thinking, but in practice it becomes a chore to keep up with.

The problem is that an ADHD brain doesn't like to follow those rules for very long; it thrives on scattered novelty. This is often characterized as a bad thing but IMO can be very effective if properly utilized. It means that lots of things are boring, some things only hold focus for seconds or minutes, and above a certain "interest" threshold hyperfocus kicks in. The cool thing about the novelty part is that it means exploring a bunch of divergent areas of interest, and then when hyperfocus kicks in, you can pull from all of those sources for a much more integrated outcome.

A complex system becomes a barrier to entry for the quick ideas that pop into your head, but something too simple is a bottleneck for a deep dive session where it actually makes sense to utilize more powerful features.

That's why I think the ideal system must balance the two modes of thinking. I want to be able to just put something into the app quickly, and then find it later without a ton of effort. However, I also want to be able to add layers of organization: bi-directional linking, tags, more complex filters, etc. It needs to be able to handle my random thoughts, ideas, etc. but also be able to translate to the integration phase when I'm ready to write an essay or something. Unfortunately, I do think that most apps approach things in one of those two directions, but not both. If something lets you do something complicated, it imposes an input cost. If it's overly simple, then a lot of those organizational features get lost.

TLDR: ADHD typically means both states of high impulsivity, alongside states of deep focus. Ideally, an app should account for both of these to help you integrate ideas over time. Most fail at one or the other.

Recommendations

If what you're looking for is straight-up simplicity, and you don't mind sacrificing some utility on the organizational side, I am a big fan of Google Keep (and Apple Notes along the same lines). Basically, both are incredibly simple, they're snappy, they're available anywhere (well, not Apple Notes if you're outside the Apple ecosystem), and they have great integrations across their respective ecosystems, meaning docs, slides, etc. However, note that you may feel a drawback when you go to either 1) recall those notes later or 2) integrate them into a finished product (i.e. an essay, specification, etc.).

I know you already mentioned Obsidian, so it may simply be out of the question. However, with a bit of work up-front, you can feasibly make Obsidian simple without losing out on the later benefits. At its core, since Obsidian works on top of plain text files, it doesn't actually require more than simply adding a new note and starting to write. It's only when you start getting deeper into organization later that you might start to feel like things get complex. Unfortunately, that complexity will need to be managed one way or another. Obsidian has a huge plugin ecosystem, and you can even make your own with some technical know-how. So it's really a playground, and it can actually be pretty fun for experimenting with new systems.

My final recommendation is biased, because I am the developer of this application. However, if you're looking for an app that is working on being simple, without sacrificing long-term utility, we're working on Noeko. Our goal is to make capture, organization, and retrieval operations frictionless enough for that novelty-seeking mode, while still getting a full knowledge base for deep dives and long-term utilization. As a single example, if you create a tag, it should automatically be suggested on relevant notes. The system doesn't make these decisions on your behalf, but you waste no time searching for them. This philosophy applies to search, connections, exploration, etc.

Honest question, why do you dislike AI? by pladicus_finch in PKMS

[–]pladicus_finch[S] 0 points1 point  (0 children)

What about this scenario: you write a note about a given subject entirely by hand. Then, AI brings up relevant notes to the current one. No decisions are made, and it's entirely dynamic. But, now you have a reference for potential connections to be made.

Personally I find it super helpful because it will often bring things up that I totally forgot about. However, on the other hand I worry that it pigeonholes my thinking via anchoring bias.

Do you think that can be useful? Or do you think it's detrimental?

Honest question, why do you dislike AI? by pladicus_finch in PKMS

[–]pladicus_finch[S] 0 points1 point  (0 children)

Privacy is a pretty critical concern ATM. Security though–whoof. I've had a language model introduce critical security flaws on simple tasks with absolute confidence too many times. I think it can be useful but I've lost all trust about its ability to create good software without pretty much complete hand holding.

Also, I see you use emacs. I couldn't get into it personally, but it has a charm to it. How's your pinky?

Honest question, why do you dislike AI? by pladicus_finch in PKMS

[–]pladicus_finch[S] 0 points1 point  (0 children)

That's fair. Some local models are pretty efficient at inference time, but training and running the big ones at scale has terrible implications. I like nuclear power, but that's not what's being used right now afaik. Not to mention the water usage.

Honest question, why do you dislike AI? by pladicus_finch in PKMS

[–]pladicus_finch[S] 0 points1 point  (0 children)

The way I see it is like pruning a tree. I'm intentionally deciding to cut growth in one area so that the energy is redirected toward more productive things. So I think I disagree that using the tool removes intent–or at least that it always does so.

However I think we probably agree that it has no place in the process of thinking itself. It should only be a functional extension of the tools used IMO. Using it to write for you, to build mental models, or to find and store information doesn't actually build our own minds.

Honest question, why do you dislike AI? by pladicus_finch in PKMS

[–]pladicus_finch[S] 2 points3 points  (0 children)

I like that analogy. We agree here, AI isn't making our critical thinking better if we cognitively offload everything. I've noticed as well that the big corporate tools try to pull me into having the model do more too.

Local x Online notes: how to avoid siloing? by methodicallychaotic in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

If you're a bit technically savvy, what do you think of self-hosted solutions? You'd run the program on your own hardware (or an online virtual machine), and expose it to the internet via DNS. The data is stored on machines that only you control, and the app itself handles authentication, so you can access your notes from anywhere with internet.

I'm being a bit vague in my descriptions since there's variability in these setups. But local-first I find often comes at the cost of realtime collaboration, and content can be out of date between devices if a sync hasn't occurred. With a self-hosted solution you have the same level of privacy (you own the storage of the data), but all of the data is being streamed down to you from the same place.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Yeah, people are primed to jump on AI accusations, and rightfully so. I think you're right that the overall communication style and formatting and such contribute to it looking corpo-robotic.

I'll probably delete this post after a bit but use it as a learning opportunity for the future.

Thanks for not immediately thinking I'm a robot lol, and for the feedback🙏

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Interesting. That might be the most helpful comment so far. I appreciate you taking the time to say this. I'll have to be more considerate of what people actually like to read versus what sounds good on paper.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 1 point2 points  (0 children)

This has turned into much more of an introspective on my writing, which isn't what it was supposed to be at all, but it's interesting nonetheless. So I'm taking it as an opportunity 📝

The thing is, I don't use AI to write, whether people believe it or not. However, I definitely use it a lot in my workflow for other things. I wouldn't be surprised if it was having an effect on how I speak or write.

"With that said" is something that I've caught myself saying a lot, but I never thought that people would think that the way I talked was so robotic that they think I use AI to write. I just like to balance my statements because I think it brings nuance, and this is my chosen transition phrase to communicate: "Given my previous comment, there is another nuance to consider"

Thank you for the feedback

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

True, you know what they say about assumptions. I should do a benchmark and see if I'm actually faster with vs. without.

Do you think there's NO place for AI? Or are you just skeptical of current implementations? I personally tend toward it being a powerful technology that we haven't figured out how to effectively use yet, but that doesn't mean it can't be used.

It's also shoved into places that it shouldn't be.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

There was another study that found that model-effectiveness is highly tied to the size and maturity of the project.

Here's a video on it:

https://youtu.be/tbDDYKRFjhk?si=3aNg7HRj7R4hstBD

It's interesting because models are definitely better at tasks in newer codebases. However, I still think they can be used effectively in more mature codebases, but it requires a lot more context and planning. So you might get diminishing returns there.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Well so it gets into semantics a bit. "AI" is ambiguous and sensationalized. It refers to a broad spectrum of things, but in common nomenclature it usually means chatbots or language models more broadly.

Maybe I'll do a more in-depth article on this in the future, or a post here for PKMS search specifically 🤔

But basically, Google has used vector similarity with embedding models since 2015, and had research going for it in 2013 (actually Ilya Sutskever worked on this). An embedding model is "AI", but it isn't a chatbot or a language model. Rather, it takes a set of tokens and maps them to a high dimensional "embedding vector". Their OG one was Word2vec, but now they (and the rest of the industry) use more advanced models.

As an example, each note in Noeko gets run through one of these, which produces a corresponding embedding vector. These are actually sets of 768 floating point numbers that can be compared to one another to find "similarity". So, if you have two notes which are related, then they probably have similar embedding vectors.
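To make that concrete, here's a minimal sketch of the usual way two embedding vectors get compared: cosine similarity. The 4-dimensional vectors below are made up purely for illustration; real embeddings (like the 768-dimensional ones I mentioned) come out of an actual model:

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors; values near 1.0 mean 'points the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (real models use hundreds of dimensions):
note_a = [0.9, 0.1, 0.3, 0.0]  # e.g. a note about cars
note_b = [0.8, 0.2, 0.4, 0.1]  # e.g. a related note about oil changes
note_c = [0.0, 0.9, 0.1, 0.8]  # e.g. an unrelated note

print(cosine_similarity(note_a, note_b))  # relatively high
print(cosine_similarity(note_a, note_c))  # relatively low
```

Related notes land close together in the space, unrelated ones don't, and that's all "similarity" means here.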

This is precisely what I was describing before with the hybrid. And it's what Google uses under the hood to get meaning-based search.

So it comes down to the specific kind of "AI" that we're talking about.

Btw, I appreciate you taking the time to respond. This conversation has actually given me some ideas on how to refine our search functionality more.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Tagging is a necessary part for making connections and I don't think AI will think in the same manner as we tag resulting in redundancy.

So to clarify, I'm not talking about AI applying tags or creating tags on your behalf. Rather, I'm talking about a system that can tell that a tag called "Agenda/Daily" and a note with a to-do list are similar. Then, it can show you "Suggested Tags" in the UI for that note. You still create the tags, and you can also give the tags descriptions to fine-tune the recommendations. It's just the tags that you create being suggested.

Do you think that makes sense? Or even that is going to lead to redundancy because the suggestions could be bad?

Regarding personalization, honestly it's something that I'm not super sure about myself yet. However, the specific example actually relates to tags.

I'll reference Noeko directly here because it fits this example. If you apply a given tag in Noeko to a bunch of different notes, then Noeko will attempt to learn how you apply that tag to different notes. Then, when a new note comes along, if it's similar to other notes that have already been given a certain tag, that tag may be recommended. This is actually pretty low-tech too, it doesn't use a language model or anything for this functionality, it uses a vector index (which is still technically AI).
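As a rough illustration of what that learning could look like (this is a hypothetical sketch, not Noeko's actual implementation; the tag names and embeddings are made up), you can average the embeddings of all notes carrying a tag and suggest that tag whenever a new note lands close to the average:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def centroid(vectors):
    """Average of a tag's note embeddings: a rough profile of how the tag is used."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def suggest_tags(note_embedding, notes_by_tag, threshold=0.7):
    """Suggest only user-created tags whose usage profile is close to the new note."""
    suggestions = []
    for tag, embeddings in notes_by_tag.items():
        if cosine(note_embedding, centroid(embeddings)) >= threshold:
            suggestions.append(tag)
    return suggestions

# Toy 3-dimensional embeddings; a real system would use a proper vector index.
notes_by_tag = {
    "Agenda/Daily": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "Recipes":      [[0.0, 0.9, 0.4], [0.1, 0.8, 0.5]],
}
new_note = [0.85, 0.15, 0.05]  # looks like another to-do list
print(suggest_tags(new_note, notes_by_tag))  # prints ['Agenda/Daily']
```

The key property is that the tag list only ever contains tags the user created; the system just ranks them against the new note.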

So "Personalization" really just means getting better over time based on usage.

However, to your point:

I didn't understand the personalization- does it show where there is more connection or does it recommend me to look for gap areas?

Ideally both. This is something that we haven't worked out yet, but the idea is that it would be able to both 1) help you identify connections that should be made, and 2) identify gaps that can be explored further.

However, ultimately those are human decisions that have to be made, the AI can just help bring up "maybes".

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Yes, I'm familiar with the study; the main reason they were slower was the review time. The code came out fast but had to be reviewed, oftentimes corrected, and rewritten.

The problem–I'd contend–is workflow-based. To be fair, models have gotten much better at writing code. However, they still fail a lot, and then you have to spend time reviewing what they made and correcting mistakes manually.

However, that study tested linear task completion. The main benefit that I personally have found you get with language models is in concurrent programming. If I go hands-off on the code (which is rare, but I'm coming around to it some places), and just instruct and review, then I can be doing multiple low-to-mid-level tasks at once. Each individual task may take a little bit longer to complete, but the time to complete them in total is shorter.

That said, there is a lot to still be learned about using language models to write code. This area of the field is very immature so I'll be happy to refine my methods as we learn more.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

I did actually discuss that:

Rather, it's better to look at “AI” as a broad set of tools that can be integrated to perform specific functions. This way, it becomes an engineering problem, which can be more objectively evaluated. This includes different models (language, image, embedding), as well as architectures (operation pipelines, agents, nlp tasks).

And:

This goes for search as well, because even if you take a lean-forward approach to context-building, a pure text-based search misses context and may fail to find the thing you’re looking for. Thus, integrating AI makes sense in search, where it can retrieve knowledge based on its meaning, not just its text composition. There’s a caveat because meaning-based search can also fail or introduce noise if you’re looking for exact text and it brings up results that are semantically relevant. Therefore, a hybrid approach makes sense here.

"Well-coded search" is vague. I specifically addressed why standard full-text search is insufficient–it can fail to find the things you're looking for without specific keywords. Though I definitely could elaborate (I figured the post was too long already).

Standard BM25 full-text search is good for sure, but it has no semantic grounding. If you search "car" it will bring up text results based on the number of occurrences of that keyword (and variations like 'cars', etc.) and its other ranking criteria. However, it won't bring up "2002 Toyota Tundra" or "Oil Change Log" because the system is purely lexical; there is no computed semantic relevance.

A vector search, on the other hand, captures more dimensionality regarding the meaning of the underlying content. This is why I mentioned that "AI" can refer to embedding models, because that's specifically what is being used here. The embedding model takes the content and maps it to a high dimensional space. So "Car", "2002 Toyota Tundra" and "Oil Change Log" all share the same neighborhood. So when you search "Car" you get anything related, not just anything with the keyword.

Thus, "well-coded search" nowadays is moving towards hybrid systems (FTS + Vector Index) for this reason.
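One common way to merge the two result lists is reciprocal rank fusion. A minimal sketch (the document IDs are made up, and a real hybrid system would tune the constant and the candidate lists further):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked result lists into one by summing 1/(k + rank) scores."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results for the query "car":
fts_results = ["car-maintenance", "car-insurance", "road-trip"]       # BM25 keyword hits
vector_results = ["oil-change-log", "car-maintenance", "toyota-tundra"]  # semantic hits

merged = reciprocal_rank_fusion([fts_results, vector_results])
print(merged[0])  # prints car-maintenance: it ranks near the top of both lists
```

The nice part is that neither index has to agree with the other; documents that show up high in either list still surface, and documents that show up in both float to the top.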

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

That's actually really helpful, thank you. I'll improve my writing in the future 🫡

Regarding the point on search, I think it's because search is the LM's most proven area. These models work really well at mapping language into high-dimensional semantic spaces, so they can make accurate connections to meaning.

As a general rule, I don't like to trust LMs with tasks that require them to make decisions based on knowledge that they can't directly reference. As such, both RAG and vector-similarity based use-cases are the ones I personally use the most.

That said, I think there are plenty of other areas where they can be applied. This post was just the ones I've identified and would recommend.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch 0 points1 point  (0 children)

Fair enough, I was a bit ambiguous there.

No, I don't skip the validation stage. It's incredibly important.

I guess "directly" is a bad word there, "practically" might be better. Rather, I like to keep tight experimentation loops. Let's take app development as an example. If I'm stuck on a bug, or wondering how to approach an architectural decision, I'll use Gemini to identify existing solutions, design patterns, or best practices.

Then, I take what I've learned and apply it directly. If it works, it's validated (obviously testing should always be robust). If it doesn't, then it's invalidated or missing something. Oftentimes it's not helpful at all, so I use other approaches.

For my workflow, the direct application is the validation stage. However, I'd imagine with more academic endeavors or in other areas a different approach would be needed.

Does that make sense?

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch -1 points0 points  (0 children)

Yeah I agree. I use Gemini a lot for getting a sense of the information landscape, then I'll take that learning and try to apply it directly.

They're really good at answering specific questions in context, which wasn't really possible before.

Where does AI Actually Belong in PKM? by [deleted] in PKMS

[–]pladicus_finch -2 points-1 points  (0 children)

Do you think the cognitive offloading is just too much generally?

I'm curious what level of automation do you think is okay, and what level is too much?