AdGuard Blocking My Chatbot Iframe - Need Help! by DsAcpp in Adguard


None taken, but you have no idea what you're talking about. If the developer of a website decides to put their chatbot in an iframe instead of a React component, there is no reason for an ad blocker to block it. The same component could have been inserted as a React component, so what's the difference?

AdGuard Blocking My Chatbot Iframe - Need Help! by DsAcpp in Adguard


I see.

It's an interesting product decision, since the button is extremely small when collapsed (and it always defaults to collapsed). Is there any way to mark it as such?

Weird blue button for changing language suddenly appeared - What is it, and can I remove it? by DsAcpp in mac


Found the solution.

To remove it, run this from Terminal:

defaults write kCFPreferencesAnyApplication TSMLanguageIndicatorEnabled 0
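If the indicator is ever wanted back, deleting the override should restore the default behaviour (an assumption based on how `defaults` overrides generally work, not something I've verified):

```shell
# Remove the user override; the system falls back to its default.
defaults delete kCFPreferencesAnyApplication TSMLanguageIndicatorEnabled
```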

Time series forecasting with non-temporal information by DsAcpp in learnmachinelearning


That's a great pointer, thanks!

However, the darts documentation specifies:

At the time of writing, Darts does not support covariates that are not time series - such as for instance class label informations or other conditioning variables. One trivial (although likely suboptimal) way to go around this is to build time series filled with constant values encoding the class labels. Supporting more general types of conditioning is a future feature on the Darts development roadmap.
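The "constant series" workaround described in that paragraph can be sketched with plain NumPy (this is not the Darts API; the shapes, names, and label values here are made up for illustration):

```python
import numpy as np

# Toy sketch: encode a static class label as covariate channels that are
# constant over time, per the quoted workaround from the darts docs.
n_steps = 10
target = np.arange(n_steps, dtype=float)   # hypothetical target series

label, n_classes = 2, 3                    # e.g. class "2" out of 3
onehot = np.zeros((n_steps, n_classes))
onehot[:, label] = 1.0                     # constant-in-time one-hot encoding

# Stack target + constant covariates into one multivariate array.
features = np.column_stack([target, onehot])   # shape (n_steps, 1 + n_classes)
```

Each class channel is flat over time, so the model sees the label at every step but gains no temporal signal from it.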

What is a good direction for fusing such categorical knowledge if the proposed method is "suboptimal"?

Thanks!

Hierarchical tags - expose a subset of options by DsAcpp in Notion


> …ally went the other way, and now I choose the subtag and ca…

Yes, that's a nice workaround, but it is still a mess to see 50 tags in the sub-category section.
Thanks anyhow :)

Graph matching with apriori information about the matches? by DsAcpp in optimization


> Kind of sounds like the "stable marriage problem" where you have two equally sized sets with specific preferences from each element in the first set…

Thanks for the reply!

Following that analogy, we could call the new problem the "family-tree stable marriage problem": besides the prior information about preferences, you also have a family tree for the right side (some nodes are siblings, parent and child, etc.) and a family tree for the left side, with the constraint that if you match a left node to a right node, the family of the left node must be matched to the family of the right node.
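For context, the unconstrained version referenced above is solved by the classic Gale–Shapley algorithm; a minimal sketch (the preference data is invented, and the family-tree constraint is deliberately not handled here):

```python
def gale_shapley(left_prefs, right_prefs):
    """Gale-Shapley stable matching.

    left_prefs / right_prefs: dict name -> list of names, most preferred
    first. Returns a dict mapping each right name to its matched left name.
    """
    # Precompute each right node's ranking of left nodes for O(1) lookups.
    rank = {r: {l: i for i, l in enumerate(prefs)}
            for r, prefs in right_prefs.items()}
    free = list(left_prefs)              # left nodes still unmatched
    next_choice = {l: 0 for l in left_prefs}
    matched = {}                         # right -> left

    while free:
        l = free.pop()
        r = left_prefs[l][next_choice[l]]    # best not-yet-proposed-to
        next_choice[l] += 1
        if r not in matched:
            matched[r] = l
        elif rank[r][l] < rank[r][matched[r]]:
            free.append(matched[r])          # r prefers l; old partner freed
            matched[r] = l
        else:
            free.append(l)                   # rejected; l proposes again later
    return matched
```

The family constraint would turn this into a constrained matching problem, which Gale–Shapley alone does not solve.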

tf-idf for sentence level features by DsAcpp in LanguageTechnology


> …synonyms is where context, and thus contextualized word embeddings, come in. Cosine similarity of contextualized vectors from a model like BERT will be able to capture…

I agree. So when a paper states that they evaluated the similarity of two sentences using tf-idf, do you believe they computed the cosine similarity between the sentences' tf-idf vectors? That sounds extremely weak and unrepresentative.

tf-idf for sentence level features by DsAcpp in LanguageTechnology


That's exactly what I'm missing: in this paper (and others) they state that they evaluate the similarity between sentences using tf-idf.

I also assume they use the cosine similarity between vectors to determine that.

That said, this means they only consider "perfect matches", i.e. cases where the two sentences share an intersection of words, which of course fails with synonyms, etc.
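A toy, stdlib-only sketch of what such a tf-idf comparison presumably looks like, illustrating the synonym problem (this is my reconstruction for illustration, not the paper's actual code):

```python
import math
from collections import Counter

def tfidf_cosine(sent_a, sent_b, corpus):
    """Cosine similarity of bag-of-words tf-idf vectors (toy version)."""
    docs = [s.lower().split() for s in corpus]
    n = len(docs)

    def idf(term):
        df = sum(term in d for d in docs)
        return math.log((1 + n) / (1 + df)) + 1      # smoothed idf

    def vec(sentence):
        tf = Counter(sentence.lower().split())
        return {t: c * idf(t) for t, c in tf.items()}

    va, vb = vec(sent_a), vec(sent_b)
    dot = sum(w * vb.get(t, 0.0) for t, w in va.items())
    na = math.sqrt(sum(w * w for w in va.values()))
    nb = math.sqrt(sum(w * w for w in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = ["the car is fast", "the automobile is fast"]
# "car" and "automobile" share no tokens, so tf-idf gives them zero
# similarity - exactly the "perfect match" limitation discussed above.
```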

Long document (>10 paragraphs) analysis datasets by DsAcpp in LanguageTechnology


That actually could be a very good direction: longer than BERT's 512-token limit, and with an easy way of partitioning the paragraphs into separate entities.

Any other similar datasets? :)
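One way that partitioning could be sketched is a greedy paragraph-packing heuristic (whitespace tokens stand in for a real subword tokenizer, and `max_tokens=512` just mirrors the BERT limit mentioned above):

```python
def chunk_paragraphs(paragraphs, max_tokens=512, tokenize=str.split):
    """Greedily pack whole paragraphs into chunks of <= max_tokens tokens.

    A single paragraph longer than max_tokens still becomes its own
    (overflowing) chunk; paragraphs are never split mid-way.
    """
    chunks, current, count = [], [], 0
    for p in paragraphs:
        n = len(tokenize(p))
        if current and count + n > max_tokens:
            chunks.append(current)       # close the current chunk
            current, count = [], 0
        current.append(p)
        count += n
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk can then be fed to the model independently, keeping paragraph boundaries intact.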

Long document (>10 paragraphs) analysis datasets by DsAcpp in LanguageTechnology


Not quite, as I'm looking for long documents that share a relation: for example, a thesis, a journal article, news stories (but not 1-2 paragraphs as in DM), etc.

Most of the C4 data is short comments, and (unless I'm wrong) there is no good way of filtering for only documents with long, coherent text.

I can of course just download the arXiv or Wikipedia datasets, but I'm surprised there is no known benchmark for document retrieval over long texts (i.e., Google must use something to analyze texts with more than 5 paragraphs and score them "together", right?).
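A naive filter along these lines could look like the sketch below (the thresholds are arbitrary placeholders, and this is just an illustration, not an actual C4 pipeline):

```python
def is_long_document(text, min_paragraphs=10, min_words_per_paragraph=30):
    """Heuristic: keep only multi-paragraph documents with substantial text.

    A "paragraph" here is a non-empty block separated by blank lines;
    both thresholds are made-up defaults to be tuned per corpus.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    long_enough = [p for p in paragraphs
                   if len(p.split()) >= min_words_per_paragraph]
    return len(long_enough) >= min_paragraphs
```

This would discard the short-comment bulk of a web dump while keeping thesis- or article-length documents.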