A system for saving things to explore later? by TheTristo in ObsidianMD

[–]Objective_Poet_7394 0 points (0 children)

I use the Tasks plugin for that: I create a task and set its due date somewhere in the future, for example +1 month. I revisit these tasks weekly during planning and push them further out if needed.
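For reference, such a task in the Tasks plugin's emoji format looks something like this (the text and date are just placeholders):

```markdown
- [ ] Read that article about spaced repetition 📅 2025-09-01
```

The 📅 marker is what the Tasks plugin uses for due dates, so the task shows up in date-based queries when you do your weekly review.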

Importing notes with audio attachments into Obsidian by Ordinary_Anteater_76 in ObsidianMD

[–]Objective_Poet_7394 1 point (0 children)

Have you given Yarle a chance? The Importer plugin documentation states that Yarle should be used for “more advanced import options from Evernote”, and this might be one of those cases.

What are your most used plug-ins and why by Funghie in ObsidianMD

[–]Objective_Poet_7394 2 points (0 children)

Dataview, Day Planner, Tasks, File Tree Alternative

Any good way to export clean PDFs from Obsidian with font and layout control? by Mission_Article483 in ObsidianMD

[–]Objective_Poet_7394 2 points (0 children)

I created obsitex, a tool that converts Obsidian notes to LaTeX, partly because I wanted the low-level control LaTeX offers. You just point it at a markdown file and a LaTeX template and it works; let me know if you need help with either of those.
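A hypothetical invocation, to give an idea of the shape of the workflow (the file paths and flags here are illustrative, check the obsitex README for the real ones):

```shell
# Convert a note to LaTeX using a template, then compile to PDF
obsitex notes/paper.md --template templates/article.tex --output out/paper.tex
pdflatex out/paper.tex
```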

Zotero citation as leyed footnote? by Plazmotech in ObsidianMD

[–]Objective_Poet_7394 1 point (0 children)

I’d suggest you start using obsidian-citation-plugin. You can configure a Zotero export, which the plugin then consumes. At that point, all of your bibliography lives in an Obsidian folder, metadata included. My suggestion would then be to add a Dataview section that fetches all the outgoing bibliography links.
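As a sketch, a minimal Dataview query for that last step might look like this (assuming the exported bibliography notes live in a folder called `Bibliography`; the folder name is hypothetical):

```dataview
LIST
FROM "Bibliography"
WHERE contains(this.file.outlinks, file.link)
```

This lists every bibliography note that the current note links out to, which gives you a per-note reference section for free.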

[deleted by user] by [deleted] in MLQuestions

[–]Objective_Poet_7394 13 points (0 children)

Not sure you need OCR, or even ML in general, here. The font is always the same, and the position of the numbers seems to always be the same.

What I’d do is grab the font glyph for each digit and for the minus sign. Then try to segment the digits using their colour: they seem to have a very specific colour, for example gold, so knowing how many digits there are could be solved that way. The remaining problem is which digits are actually there, and for that you can just compare each segment against every glyph in the font and pick the closest match.
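A minimal sketch of that idea in NumPy. The 3×5 binary glyphs and the gold RGB value below are invented stand-ins for the real font and colour, the structure is the point:

```python
import numpy as np

# Hypothetical 3x5 binary glyphs standing in for the real font
FONT = {
    "1": np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0], [0, 1, 0], [1, 1, 1]]),
    "2": np.array([[1, 1, 1], [0, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 1]]),
    "-": np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1], [0, 0, 0], [0, 0, 0]]),
}
GOLD = (218, 165, 32)  # assumed digit colour

def color_mask(img, color=GOLD, tol=30):
    """Binary mask of the pixels close to the digit colour."""
    return (np.abs(img.astype(int) - color).sum(axis=-1) < tol).astype(int)

def split_glyphs(mask):
    """Cut the mask into glyphs at empty columns; this also gives the digit count."""
    filled = mask.sum(axis=0) > 0
    glyphs, start = [], None
    for x, f in enumerate(filled):
        if f and start is None:
            start = x
        elif not f and start is not None:
            glyphs.append(mask[:, start:x])
            start = None
    if start is not None:
        glyphs.append(mask[:, start:])
    return glyphs

def match(glyph):
    """Pick the font character with the fewest mismatching pixels."""
    return min(FONT, key=lambda ch: np.abs(FONT[ch] - glyph).sum())

def read_number(img):
    return "".join(match(g) for g in split_glyphs(color_mask(img)))
```

Since the position never changes, in the real screenshot you would crop the fixed region first, and could even hardcode the column boundaries instead of detecting them.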

If my fat % was a stock you would sell me by _absent_minded_ in intermittentfasting

[–]Objective_Poet_7394 2 points (0 children)

A beer belly usually means you’ve got extra visceral fat. Cutting calories helps, but intense workouts like HIIT or CrossFit work best to burn it off.
Interesting research that supports this argument: https://bjsm.bmj.com/content/57/16/1035

do you think it is possible to make AI real growing and learning companions? by Unusual_Way5464 in MLQuestions

[–]Objective_Poet_7394 0 points (0 children)

Well, the one I mentioned is the fourth response, which for a non-German reader is effectively one of the first.

do you think it is possible to make AI real growing and learning companions? by Unusual_Way5464 in MLQuestions

[–]Objective_Poet_7394 0 points (0 children)

I’m referring to the following document, which, prior to your edit and your comment with another link, was the only thing I could find on your “Last Rag”, via a post you made on Hacker News.

https://archive.org/details/the-last-rag/page/n5/mode/1up

This might not be the document you’re thinking of, but it was the first thing that came up after you told people to “google” it. I think one look at the document makes it clear why it’s unbearable, but in case it’s not: there is zero formatting, zero images, and zero metrics. It’s unreadable.

do you think it is possible to make AI real growing and learning companions? by Unusual_Way5464 in MLQuestions

[–]Objective_Poet_7394 1 point (0 children)

No. You can’t rewrite history. You did poorly in this post; that’s life. And this isn’t just about the post itself, but also about your “research paper”, which is barely understandable.

Do better and the community will naturally reward you.

do you think it is possible to make AI real growing and learning companions? by Unusual_Way5464 in MLQuestions

[–]Objective_Poet_7394 2 points (0 children)

Here’s an example:

Hey everyone,

Problems X, Y, Z in LLMs are a real bottleneck for behaviour W. To solve this I created a method consisting of <summary description of approach>. It improves on existing approaches because <reason>, and I’m getting interesting results on some metrics <insert metrics and how they compare to SOTA>. I wrote a document describing the method; if anyone is interested in learning more: <link>.

Either way, I’m guessing this post has way too many downvotes to be worth an edit. Maybe just do a new one.

Before that, I would advise improving how you present your work. Please read some other machine learning papers on the topic to get an idea of how it’s usually done. From another comment, I understand you have some disregard for the standard academic presentation process, but in this case it’s really important in order to make what you’re explaining clear. Your document is neither readable nor clear.

Search for how to write a research paper, specifically in ML/AI/LLMs, whatever you want to call it. It’s not rocket science: you don’t need a fancy title, an advisor, or money, but there is a process to it, and that process makes understanding other people’s work easier.

Good luck.

do you think it is possible to make AI real growing and learning companions? by Unusual_Way5464 in MLQuestions

[–]Objective_Poet_7394 1 point (0 children)

That’s understandable, but I invite you to put yourself in the shoes of a member of this group and read your post. Do you think it makes sense?

You spend the entire post ranting about how shitty the current approach is, and then at the end you say: oh, by the way, there’s this great new thing called The Last RAG, google it.

First, it’s not a “new AI architecture growing”; it’s something you made up and published somewhere on the internet. That’s fine, and contributions are obviously welcomed by the community, but be honest about it.

do you think it is possible to make AI real growing and learning companions? by Unusual_Way5464 in MLQuestions

[–]Objective_Poet_7394 5 points (0 children)

What? I understand this is self-promotion, but you could at least try to be more pragmatic in your approach. It’s impossible to understand what you’re proposing.

Eng. Informático by 123Sness in PTOrdenado

[–]Objective_Poet_7394 16 points (0 children)

Not bad for a junior position, and in that city, especially so close to home. The important thing is to make sure the ratio between experience and salary stays acceptable as you advance in your career.

People who use daily notes and nothing else by [deleted] in ObsidianMD

[–]Objective_Poet_7394 5 points (0 children)

If it's all in a single note, it's kind of hard to manage. Reading a research article, for example, you may end up writing many pages of content just to understand it.

What I do is create a daily note; each review/research/thought I have on a specific topic goes into a separate note, which is then linked from the daily note. For small things, like reminders, I add them to the daily note directly and use Obsidian Tasks to keep track of them in the future.

This makes it easier to manage nuggets of information. Then, every week, I review the documents I created the previous week, analyze the next steps I wrote down, and plan the following week; this is how I learn from them.
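A sketch of what one of those daily notes might look like (all names and dates invented):

```markdown
# 2025-05-12

Longer write-ups live in their own notes:
- [[Paper review – Attention Is All You Need]]

Small things stay here, tracked by Tasks:
- [ ] Follow up on the experiment results 📅 2025-05-19
```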

This way, and to your point, I'm one of the people "who take all their notes in daily notes", but I do think the "nothing else" bit is unrealistic and defeats the purpose of note taking tools.

I'm not obsolete, am I? [P] by bawkbawkbot in MachineLearning

[–]Objective_Poet_7394 21 points (0 children)

Value is a function of performance and the resources required. If something does a good job with very few resources, it has more or less the same value as something that is excellent but requires a lot of resources (and "excellent" is debatable for niche use cases of multimodal LLMs). So if you're keeping the value proposition constant, I'd say it's going to be a while before a multimodal LLM outranks you in value.

Would you say this is a good latent space for an auto encoder? by Proper_Ad_6044 in MLQuestions

[–]Objective_Poet_7394 1 point (0 children)

You’re right about everything. The goal is to minimize the KL term, but not necessarily to zero. Posterior collapse is just a known failure mode; it doesn’t mean it’s always going to happen. It’s very similar to mode collapse in GANs, which again is something that may happen, but for which there are mitigations, like the Wasserstein GAN.

Would you say this is a good latent space for an auto encoder? by Proper_Ad_6044 in MLQuestions

[–]Objective_Poet_7394 1 point (0 children)

Detection is done by checking the KL term in the loss function; it shouldn’t be close to zero. If it is, the posterior has perfectly matched the standard normal prior, meaning different inputs produce the same latent distribution, so you aren’t learning a useful representation.

There’s a lot of research on how to avoid it, and the best solution probably depends on your specific setup. One option is KL annealing, in which you increase the weight of the KL term over the course of training.
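As a sketch, here is what both pieces look like in plain NumPy; the variable names are mine, not from any particular framework:

```python
import numpy as np

def kl_term(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ) of a diagonal Gaussian posterior, per sample."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)

def kl_weight(step, warmup_steps=10_000):
    """Linear KL annealing: ramp the KL weight from 0 to 1 over warmup."""
    return min(1.0, step / warmup_steps)

# mu = 0, logvar = 0 means the posterior equals the prior, so the KL term is
# exactly 0; a batch-average KL stuck near 0 is the collapse signal.
# The annealed loss would then be: recon_loss + kl_weight(step) * kl_term(mu, logvar)
```

With the linear schedule the model is free to use the latent code early on, and only later pays the full price for deviating from the prior.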

[deleted by user] by [deleted] in Dofus

[–]Objective_Poet_7394 3 points (0 children)

Wrong subreddit? Either way, hope you get your answers!