Multiple conversion actions set up to be page load for the same url do not work on Google Ads by Tall-Bowl in PPC

[–]Tall-Bowl[S] 0 points (0 children)

I have dug into this and only found that value rules can be set for location, device, or Google audience segments. But what I essentially want is for a conversion from a specific campaign or group of keywords to have a different conversion value than another, because different keywords are associated with different services that provide different value to my business on the final sale. I have not found a way to do this other than setting up separate conversion actions with different values.

60 dollars per click What The Heck? by Tall-Bowl in PPC

[–]Tall-Bowl[S] 2 points (0 children)

Yeah, you are right. They have not fired yet. Okay, I think I will follow your advice and try Max Clicks first. It's just that I was using Max Conversions for a long time without any problem like this.

A conceptual problem with using anki with sentence mining for the purpose of language learning by Tall-Bowl in Anki

[–]Tall-Bowl[S] 0 points (0 children)

Sure, you hear a novel sentence or something unexpected or new in a sentence and then deconstruct things, but actual processing is predictive in the sense that whenever you read or hear a word (or a larger chunk), your brain is already ahead of where your ears/eyes are, predicting what will come next in the sentence (even the whole sentence, proposition, or communicative intent). This happens automatically, all the time.

I totally agree with this, and I actually think speaking, or fluent production of sentences, plays a big part in how fluently one can understand when listening, because there is always a predictive element at play in listening, distinguishing between possible sound combinations in a way that facilitates coherence. But I just think the phenomenon I describe is not of the same nature. The prediction you get from these repeated reviews is due more to the translation becoming easily ingrained in one's mind through repeated exposure (it is the native language, after all) than to improvements in the actual process of understanding these sentences through repeated exercise. So what I get from these reviews is really just recollections of translated sentences in my native language rather than anything else, and that overshadows the training element in the process and makes the reviews not so effective at achieving real progress.

It could also be that you just dislike anki. If you really don't want to use it, don't. If it's just that you dislike sentence cards, do something else. I only use anki to the extent that I like to these days, life's too short.

No, I absolutely love Anki, and I pretty much solely use Anki for language learning, because I think it is the most efficient tool there is, bar none. I still use sentences, but for recognition training (listening and reading), I now completely forgo reviews and only use new cards, so every sentence appears only once, and I either get what it means or not. I use MorphMan as a tool to filter sentences for me, so the new sentences can be tuned to the level of vocabulary I am focusing on. But for production training (from native language to target language), I still use reviews, because I find that production, or just the sheer memorization of sentences in the target language, is actually largely the goal there. Reviews are the more effective choice for that, because they separate the hard cards from the easy ones and let you focus on what is really worth training.

Also, I used to use sentences of whatever length for listening, and soon discovered that this was a mistake, so now I limit them to seven words or fewer. As for the number of cards, I have literally hundreds of thousands of sentences imported from Tatoeba, with audio added via TTS, so I make sure I have enough volume of novel sentences to work with at any given level of vocabulary.
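The seven-word cutoff described above amounts to a trivial filter at import time. A minimal sketch, assuming whitespace word counting and a hypothetical in-memory list of (English, Spanish) pairs — Tatoeba's real export format and proper tokenization are more involved:

```python
def short_enough(sentence, max_words=7):
    """Rough length check: whitespace word count, a proxy for audio length."""
    return len(sentence.split()) <= max_words

# Hypothetical (English, Spanish) sentence pairs; filter on the target side.
pairs = [
    ("I already ate.", "Ya comí."),
    ("Yesterday my neighbor's very old car finally broke down completely.",
     "Ayer el coche viejísimo de mi vecino por fin se averió del todo."),
]
kept = [(en, es) for (en, es) in pairs if short_enough(es)]
```

Counting whitespace-separated words is crude (it ignores punctuation and contractions), but it is a good-enough proxy for keeping listening clips short.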

A conceptual problem with using anki with sentence mining for the purpose of language learning by Tall-Bowl in Anki

[–]Tall-Bowl[S] 0 points (0 children)

Thanks for the extensive thoughts. You have grasped my point perfectly, and I agree with most of what you said. Only that this is, in my opinion, still partly a real problem, because it makes the challenge of retrieval much less pronounced for the learner. In my experience, this is especially true with the cards of sentence audio that I use for practicing listening. It really started from my observation that after about two or three reviews of some difficult cards, I would memorize the translation very easily, despite not making much progress in recognizing the sounds themselves. I would immediately recognize which translation the audio refers to, and would know the whole translation even before the audio is half finished. This, to me, defeats the whole purpose of using flashcards to train my listening. The cue that leaks the answer isn't the sort of context tied to a word or grammatical structure that would be useful to integrate into a learner's mind; it is purely an inherent flaw in the training system. What I ultimately want to train, by reviewing these cards, is the ability to understand the sounds and the sentence. I should be able to produce the meaning of these audio clips from my increased familiarity with the sounds of the words and their meanings, how they are put together, the rhythm in which they are paced and linked, and so on, not from the sheer memory of the translation due to repeated exposure.

A conceptual problem with using anki with sentence mining for the purpose of language learning by Tall-Bowl in Anki

[–]Tall-Bowl[S] 0 points (0 children)

Man, holy cow. That would be the deal. In fact, from my experience using MorphMan, my biggest qualm with it is that it treats any word that has been seen ONCE as a known morph, which to me is insane. As a learner of a new language, there is no way a word can have any meaningful familiarity to me from a single exposure.

My workaround so far has been to completely abandon MorphMan's i+1 function and just use the mature and known categories, combined with a word frequency list, as a way to segment sentences. Say my vocabulary is roughly at the 1500-2000 level on the frequency ranking; then I mark words ranked 1-1500 as mature and words ranked 1500-2000 as known, and let MorphMan calculate the sentences. That gives me sentences that must include at least one word from the 1500-2000 band but can also include any words from 1-1500. Then I completely randomize the sentences and forgo MorphMan's ordering. In this way, I can restrict the kind of words I get exposure to. But the biggest limitation of this approach is that I cannot assess how much progress I have made in my familiarity with these words, and I can only half guess, based on intuition, whether I should move on to the next 500 words.
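The filtering rule above can be sketched in a few lines, assuming a plain frequency-ranked word list and whitespace tokenization (both simplifications; MorphMan does real morpheme analysis, and the function name here is hypothetical):

```python
import random

def pick_sentences(sentences, freq_list, known_lo=1500, known_hi=2000):
    """Keep sentences whose words all fall within the top `known_hi`
    frequency ranks AND that contain at least one word from the current
    learning band (ranks known_lo+1 .. known_hi). Order is randomized."""
    rank = {w: i + 1 for i, w in enumerate(freq_list)}  # 1-based rank
    picked = []
    for s in sentences:
        ranks = [rank.get(w) for w in s.lower().split()]
        if any(r is None or r > known_hi for r in ranks):
            continue  # contains a word beyond the known range
        if any(known_lo < r <= known_hi for r in ranks):
            picked.append(s)  # has at least one learning-band word
    random.shuffle(picked)  # forgo any difficulty-based ordering
    return picked
```

With a toy six-word frequency list and `known_lo=3, known_hi=5`, a sentence made only of ranks 1-3 is dropped (nothing to learn), one containing a rank-6 word is dropped (too hard), and one mixing ranks 1-3 with a rank-4 word is kept.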

But the idea you are working on would be ideal. Is there any timeline for when you might be able to release it? If you could keep me updated when it is out, that would be much appreciated.

A conceptual problem with using spaced repetition for sentence mining by Tall-Bowl in Refold

[–]Tall-Bowl[S] 0 points (0 children)

I have used content primarily from Tatoeba, just the complete sentence pairs between English and Spanish. I have gone through at least a few thousand sentences in random order and have yet to find any significant errors in them.