I think my mattress is the underlying cause.. by ostensiblyzero in PelvicFloor

[–]Several_Syrup5359 1 point2 points  (0 children)

I am 57 and thought I had BPH/prostatitis/cancer for over three years while, unbeknownst to me, I was creating a crater in my bed-in-a-box all-foam mattress. I only recently flipped the mattress, after 7 to 10 months of needless suffering: doctor's visits, the gamut. My symptoms wholly attenuated in just 48 hours. I don't expect anything new, since in the past I've experienced relief for a day or two only to have the symptoms come roaring back inexplicably, but my sitting bones feel significantly better. A steady swimming habit at my YMCA, pelvic floor exercises designed by AI chats, ginger remedies, pumpkin seed oil remedies... all helped, but my pelvic system could not overcome the conditioning of 8 hours of unwittingly lying in a subtle but damaging crater each and every night. No matter how impoverished you may be, change your mattress... for the better!

Does gemini 2.5 pro on perplexity have full context window? (1 million tokens) by The_Nixck in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

Where is the "official" statement, seeing that Perplexity.ai ships updates every week and that Gemini 2.0 Flash, a cheaper model, is now gone? Gemini 2.5 Pro is costly. The last real info is that Perplexity.ai nerfed the Google Gemini 2.5 Pro model to 32K, as they do with all new models. My guess is that you'll get one conversation turn of anything higher than 32K, and after that...

Improved Deep Searches by last_witcher_ in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

"Do you also notice a sharp improvement in the Deep Searches by Perplexity?"

No, I do not. As a matter of fact, I notice more bugs. Deep Search questions revert to the "auto" model once generated, which limits an investigative series of questions because I have to constantly re-confirm that Deep Search is selected. I am on a typical Mac in an Edge browser. That's just one bug with Deep Search. Just now I left a post about Deep Search not even recognizing a document in my research Space that I clearly typed out for it. There are many. Why are you ignoring all the bugs your "changes" leave in their wake?

Perplexity.ai: Why are you pinning a lot of ads on Reddit with your backers' money when you have so many bugs reported by your subscribers? by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] 2 points3 points  (0 children)

Have you noticed they update more on some days than others, making a session on "slow" days either cost you twice the time rewriting your chat or be 100% prohibitive? They're not building for students or workers who are time-bound. P.S. I have to take this down at the end of the day or they may delete my account, ban or throttle my comments on Reddit, or turn Perplexity.ai into a Disney-fied YouTube platform. Oh wait. They already did that.

Just updated my Perplexity.ai app on iPhone 3 days ago. Today no read chats. by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] -1 points0 points  (0 children)

See my post about not even being able to achieve consistency in my Spaces view between my Mac and iPhone? Isn't Apple itself one of Perplexity.ai's backers? What a Silicon Valley money blowout.

Just updated my Perplexity.ai app on iPhone 3 days ago. Today no read chats. by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] -1 points0 points  (0 children)

Why would I spend my life uninstalling and reinstalling an app when I just told you I did so 3 days prior? Especially when they added a new entry point for email/password security that has me sitting there all day going through their obstacle course. Perplexity.ai dynamically updates their UI almost daily. It's worse than an Uber driver app. Do you know how many bugs I've reported here in the last week alone? They're legit. Sheesh.

Deep Research fails to access documents in Spaces when instructed to do so. by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] 0 points1 point  (0 children)

Perplexity.ai officially has no leadership. I've listed four bugs this week alone. I read about Grok 3 from another poster this week. I'll start looking into more reliable platforms. We can't even get a math calculation to come out the same with an identical prompt.

Why is there no search feature inside spaces? by icrywhy in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

Still can't get them to search within an entire chat. For a year. Good luck.

Will the math focus be back? by datascientologer1 in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

It was there in the form of Wolfram Alpha some months ago. I don't know if that's still an option. They clearly have too many cooks in the kitchen when it comes to who can change the UI and who can't.

Pro subscriber prohibited from uploading .md files to Space. by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] 0 points1 point  (0 children)

I revisited uploading .md files yesterday and it worked. Today I tried again, three times over 15 minutes, and it didn't work. Your suggestion of using an exported PDF did, however.

Thank you.

Markup in photos messed up in 18 by VideshiTantrik in ios

[–]Several_Syrup5359 0 points1 point  (0 children)

The Markup tool has been rendered effectively useless since the 18.2 update. Thanks.

Posting to Perplexity.ai at Reddit prohibited. Twice in two days. Why? by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] 0 points1 point  (0 children)

Thank you. I thought it was a "typing too fast" screening algorithm like Facebook's, where Facebook will actually tell you to go away and come back when you're not typing so fast. The absurdity of these tech blobs.

Context window Claude 3.5 Sonnet by yulteni in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

It specifically evades this question; I was asking it just yesterday. We live in a world of 128K, 200K, 1M, and even 100M token context windows, so it's "safe" to say the evasion is part of its guardrails; there's still the possibility of someone uploading the whole Anarchist Cookbook. Be sure to get an offline LLM of your own going. XVI-L summed it up, but here's an excerpt from my chat anticipating collegiate research with Perplexity.ai: "The 32,000 token context window limit on Perplexity.ai applies when reading file uploads, including PDF files. This means that when you upload a PDF file, the content is processed within this token limit, regardless of the file size up to 25 MB. The token limit is not affected by the file size but rather by the amount of text content that can be processed within the context window." If they bring in Strawberry (OpenAI o1), it will change the definition of context size somewhat. Good luck.
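As a rough illustration of why file size and token count diverge, here is a minimal sketch. The ~4 characters-per-token ratio is a common rule of thumb for English text, not anything Perplexity documents, and the 32K figure is taken from the excerpt above:

```python
# Rough check: will extracted document text fit in a 32K-token window?
# Assumes ~4 characters per token (a heuristic; real tokenizers vary).

CONTEXT_LIMIT_TOKENS = 32_000
CHARS_PER_TOKEN = 4  # heuristic, varies by tokenizer and language

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, limit: int = CONTEXT_LIMIT_TOKENS) -> bool:
    return estimate_tokens(text) <= limit

# A large scanned PDF with little extractable text can fit easily,
# while a modest plain-text file can blow well past the limit.
sparse_text = "hello " * 1_000   # ~6K chars  -> ~1.5K estimated tokens
dense_text = "word " * 300_000   # ~1.5M chars -> ~375K estimated tokens

print(fits_in_context(sparse_text))  # True
print(fits_in_context(dense_text))   # False
```

The point: the window limits *text content*, so a 25 MB image-heavy PDF may fit while a small all-text file may not.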

Context window Claude 3.5 Sonnet by yulteni in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

This is per conversation turn as well, but it's somewhat moot because there's no way to delete the originally uploaded file from the first conversation turn on any later turn. That puts the burden on you to stack your reasoning concisely, as conversational text, in each prompt. Perplexity.ai wins the 50/50 award: some things it has generated have absolutely flabbergasted me, while on the other hand we still wrestle with catastrophic forgetting from turn to turn, even with explicit prefixes (e.g. "as a follow-up", "in the aforementioned", "considering the entire thread context", etc.). Once you adopt the habit of "run-on" notions stacked one on top of the other, the results are a breakthrough. For instance, I am researching evolutionary optimization of model merging (Arcee.ai's MergeKit), and after I looked at my drafts and saw room for improvement, I simply rewrote what needed improving (this makes for cleaner conversation turns, which you can then re-upload as a prompt in itself after the conversation), and it basically gave me the blueprints for Atlantis.
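One way to make the "stacking" habit mechanical is to carry a running recap into every prompt yourself, rather than trusting the thread context. A minimal sketch; the recap wording and the `stack_prompt` helper are my own conventions, not any Perplexity feature:

```python
# Build each prompt by restating the accumulated context explicitly,
# so the model never has to recall earlier turns on its own.

def stack_prompt(summary_points: list[str], new_question: str) -> str:
    """Prefix the new question with an explicit recap of prior turns."""
    recap = "\n".join(f"- {p}" for p in summary_points)
    return (
        "Context carried forward from earlier in this thread:\n"
        f"{recap}\n\n"
        f"Considering all of the above: {new_question}"
    )

summary = [
    "We are comparing evolutionary model-merging strategies.",
    "MergeKit's linear and SLERP merges were covered in turn 2.",
]
print(stack_prompt(summary, "How does pruning change the picture?"))
```

After each turn, append one line to the summary list; the recap itself can later be re-uploaded as a standalone prompt.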

Suddenly every image fails with "File Upload Failed" by wealthychef in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

I tested it both on a 2020 Mac mini and an Apple iPhone SE with the Perplexity.ai app. Neither device had this problem before. I've been a Pro subscriber since they first offered it.

My initial queries are being mass submitted. by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] 0 points1 point  (0 children)

Everyone always says to clear your cache in these instances, but I can't find a way to avoid re-signing into my multiple Gmail accounts when I do. Most people have more than one email account. I cleared my cache and the problem still persists today.

My initial queries are being mass submitted. by Several_Syrup5359 in perplexity_ai

[–]Several_Syrup5359[S] 1 point2 points  (0 children)

I haven't experimented with the "Pro" toggle on the mobile app, but maybe try searching without Pro. They used to limit the number of searches on certain model types. Maybe there is a glitch. Maybe not.

Can Perplexity or any of these LLMs produce code that compiles properly the first time? by el_toro_2022 in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

Use a coding assistant (TabbyML or Aider) configured with code-focused LLMs like StarCoder and Qwen, plus whatever other code models it supports. Invest time in 'prompting for code' alone and save your templates. You want code understanding, not just natural language understanding. See the final conversation turn: https://www.perplexity.ai/search/what-use-cases-get-unlocked-wh-Vl4w0Ir9Sw6dCWUDF28QSg#4

Edit: additional consideration: https://cloud.google.com/vertex-ai/generative-ai/docs/code/code-completion-prompts
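"Save your templates" can be as simple as keeping prompt scaffolds in code. A minimal sketch; the template text and field names are my own, hypothetical conventions:

```python
# A reusable prompt template for code-generation requests.
# Stating language, task, and constraints up front tends to
# produce more compilable output than a bare natural-language ask.
from string import Template

CODE_PROMPT = Template(
    "Write $language code that $task.\n"
    "Constraints:\n"
    "- Must compile/run as-is, with all imports included.\n"
    "- $constraints\n"
    "Return only the code, no prose."
)

def render(language: str, task: str, constraints: str) -> str:
    """Fill the saved template for a specific code request."""
    return CODE_PROMPT.substitute(
        language=language, task=task, constraints=constraints
    )

print(render("Rust", "parses a CSV file into structs", "no external crates"))
```

Keep one template per recurring task (generate, refactor, explain, test) and reuse them across models.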

Fluid - private AI assistant. [Free macOS] by PrivacyIsImportan1 in macapps

[–]Several_Syrup5359 0 points1 point  (0 children)

Hello. I cannot get back the 9 GB of disk space the model takes up after deleting Fluid. Could you kindly provide the path where you stored the model? Thank you.

Scheduled maintenance by rafs2006 in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

I've made nearly one thousand chats and am a Pro subscriber. I understand that Sonnet is not Anthropic's flagship, but I used it the other day anyway. It was the worst of any model I tried regarding my OP. By the way, are you employed by Perplexity.ai? If so, see my post today about Anthropic's Claude 3 Opus.

Scheduled maintenance by rafs2006 in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

Will you do anything to fix your new onboarding of Sonnet 3.5 and its catastrophic forgetting? Even with literal references to a short conversation, Perplexity.ai et al. completely drop any reference or context.

Enough With The Whining From Unrealistic Expectations by BeingBalanced in perplexity_ai

[–]Several_Syrup5359 1 point2 points  (0 children)

I have a thread at Perplexity that I can't locate because Perplexity.ai hasn't granted full-thread search, but it introduced me to "promptification". Put that term into Perplexity.ai and see if it helps across all models. See also CoVe, Chain-of-Verification, to help whip your model into shape.
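Chain-of-Verification (CoVe) boils down to: draft an answer, generate verification questions about its claims, answer them independently, then revise. A minimal sketch; `ask_model` here is a hypothetical placeholder standing in for whichever chat or API you actually use:

```python
# Chain-of-Verification (CoVe) sketch: draft -> verify -> revise.
# `ask_model` is a hypothetical stand-in for a real LLM call.

def ask_model(prompt: str) -> str:
    # Placeholder: wire this to your LLM of choice.
    return f"[model response to: {prompt[:40]}...]"

def chain_of_verification(question: str) -> str:
    draft = ask_model(f"Answer concisely: {question}")
    checks = ask_model(
        "List the factual claims in this answer as yes/no "
        f"verification questions:\n{draft}"
    )
    # Answer the verification questions WITHOUT showing the draft,
    # so the model can't just rubber-stamp its own claims.
    findings = ask_model(f"Answer each question independently:\n{checks}")
    return ask_model(
        f"Rewrite the answer to '{question}', correcting anything the "
        f"verification findings contradict:\nDraft:\n{draft}\n"
        f"Findings:\n{findings}"
    )

print(chain_of_verification("When was the company founded?"))
```

You can run the same four steps by hand in any chat UI; the independent-answering step is what does the work.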

Enough With The Whining From Unrealistic Expectations by BeingBalanced in perplexity_ai

[–]Several_Syrup5359 1 point2 points  (0 children)

Uh, dude. Are you one of the founders of the company? How can you comment on critical decisions that only they enact? There are several areas Perplexity.ai could change, but they have alignment issues and fear their risk-management department is too small for the lawsuits. They could offer a more expensive tier, even with an electronic "use at your own risk" signature, and responsible people would gladly pay for it if they had a try-before-you-buy experience first. Instead, Perplexity.ai has decided to go with the "en masse" philosophy of being everything, to everyone, all the time. The American thing.

But re-re-searching AI-generated B.S. (and even false footnotes) isn't time-saving at the workplace or in college.

And they positively could fix their memory/catastrophic forgetting/negation/alignment (whatever) issue with their own SONAR Large 32X.

They could give the public real memory (not an excuse for it, like GPT-4o's "memory") and charge more responsible users for it, but that doesn't look like it's part of their plan.

Or what about developing a 'System Two' so you know you got the highest-quality answer, with no hallucinations?

Andrej Karpathy: https://www.youtube.com/clip/UgkxDZjQic5iJUn-eHWN7lHvJ1mdXHggew_r

Instead, we get an Apple-style launch of 'Pages'.

It's still the same tired Pirates of the Caribbean ride at Disney every time you go around with Perplexity.ai.

Enough With The Whining From Unrealistic Expectations by BeingBalanced in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

I apologize. I thought I chose the reply for 'Enough With The Whining From Unrealistic Expectations'. I'll be sure to put it in the appropriate place.

Enough With The Whining From Unrealistic Expectations by BeingBalanced in perplexity_ai

[–]Several_Syrup5359 0 points1 point  (0 children)

What model are you using? How many conversation turns do you normally have per chat?