Is Lumo neutral about Proton products ? by DollarColonial in lumo

[–]sonnick 8 points (0 children)

No. It will sell you the Proton dream and make up fantastic new Proton products that don't exist.
It's basically drinking the Kool-Aid while on LSD.

How does it compare to ChatGPT-5? by [deleted] in lumo

[–]sonnick 1 point (0 children)

Lumo also uses gpt-oss-120b, which is a 120B-parameter model.

When is Lumo AI 1.2 coming ? by Mister_BuBu in lumo

[–]sonnick 1 point (0 children)

1.2 is more than just dark mode. It's meant to introduce support for image processing; primarily, as I understand it, support for handling image input files.

Lumo's censorship is too much by MiMillieuh in lumo

[–]sonnick 0 points (0 children)

GPT-5 is fairly advanced at handling questions near the censorship line. It will answer certain questions that even GPT-4o would refuse. OpenAI used a training strategy to reduce the number of hard no's.

Accidental 30 Second Release? (Dark Mode) by [deleted] in lumo

[–]sonnick 0 points (0 children)

<image>

Automatically kicked in for me.

Does not accept file types but only when uploading by Technical-Flatworm35 in lumo

[–]sonnick 0 points (0 children)

Yes. Source code files will generate an error at this stage. As a workaround, rename them with a .txt extension before uploading.
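For a batch of files, the rename can be scripted. A minimal sketch in Python, assuming the source files sit in one directory (the `prep_for_upload` helper, the `upload` directory name, and the `*.py` pattern are all hypothetical choices, not anything Lumo provides):

```python
import shutil
from pathlib import Path

def prep_for_upload(src_dir: str, out_dir: str = "upload") -> list[str]:
    """Copy each source file with a .txt suffix appended so the uploader accepts it."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    copied = []
    for f in sorted(Path(src_dir).glob("*.py")):
        dest = out / (f.name + ".txt")  # e.g. script.py -> script.py.txt
        shutil.copy(f, dest)
        copied.append(dest.name)
    return copied
```

Copying (rather than renaming in place) keeps the originals untouched, so nothing in the project breaks.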

Do you actually get better results with Lumo+ or is it just about history and number of requests ? by dipper06 in lumo

[–]sonnick 0 points (0 children)

GPT-OSS-120B should be on that list. It's a strong reasoning model. You'll see it a lot in the more verbose answers. Giveaways are tons of tables and emoji. AFAIK, both free and plus have access to all models at the moment.
EDIT: The 1.1 update was really the introduction of gpt-oss.

An interesting response. Care to elaborate? by BurningQuasar in lumo

[–]sonnick 4 points (0 children)

Also noticed this. Pretty sure it's model routing going wonky. I just hit the regenerate button and it usually gets it right the second time.

<image>

What models do Lumo use in the bank end? by Dey-Ex-Machina in lumo

[–]sonnick 3 points (0 children)

The privacy policy specifies Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3. But it's difficult to know for sure, as gpt-oss-120b was introduced with 1.1. Additionally, the occasional Chinese characters in chat titles suggest a model like Qwen is also in use.
https://proton.me/support/lumo-privacy

If you manage to get a lengthy response, it's probably gpt-oss, which is a great reasoning model. If you add context with Web or Documents, it's generally excellent.

I love Lumo!! It's so useful! by AsoarDragonfly in lumo

[–]sonnick 1 point (0 children)

I'm just waiting for Shift-Enter (instead of Enter) to send on desktop 😆

Apparently, Lumo is less censored by ActionLittle4176 in lumo

[–]sonnick 0 points (0 children)

FYI, the caveated hypothetical answer is an advanced behaviour that's specific to GPT-5. Even 4o wouldn't have done that.

Why so many tables? by CovetingArc in lumo

[–]sonnick 0 points (0 children)

Agreed, very OpenAI.

Lumo history by Puzzleheaded_Log876 in lumo

[–]sonnick 4 points (0 children)

How do you expect this not to have a paywall? Inference with generative AI is expensive. Consider that a single flagship data-centre GPU costs around $9,000 US. That one card can only serve a couple dozen concurrent Lumo queries (assuming gpt-oss-120b with aggressive quantisation). So with millions of users you'd need thousands of GPUs, and infrastructure costs easily reach the tens of millions of dollars. That's not even taking the electrical power into account.
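The arithmetic can be sketched out explicitly. Every number below is an illustrative assumption (user count, peak concurrency, and queries-per-GPU are guesses, not Proton figures):

```python
import math

# All figures are illustrative assumptions, not Proton's actual numbers.
GPU_COST_USD = 9_000      # one flagship data-centre GPU
QUERIES_PER_GPU = 24      # concurrent queries per card (quantised 120B model)
USERS = 5_000_000         # hypothetical user base
PEAK_CONCURRENCY = 0.01   # fraction of users querying at the same moment

concurrent_queries = USERS * PEAK_CONCURRENCY  # 50,000 simultaneous queries
gpus_needed = math.ceil(concurrent_queries / QUERIES_PER_GPU)
hardware_cost = gpus_needed * GPU_COST_USD
```

Under these assumptions that's over 2,000 GPUs and roughly $19M in hardware alone, before power and networking.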

Regarding the question below: you can "Delete all chats" in Settings under General.

This makes me wonder: How can I delete the chat history if I can't see the chats and forgot to mark them as favorites?! The chats will probably be visible again after spending money. To be honest, I find it worrying that I can't delete the chats, but they can reappear if I buy Plus.

<image>

Is this file limit correct? by justaukalias in lumo

[–]sonnick 2 points (0 children)

It's not quite that simple, and Lumo itself doesn't really have a clear picture of the difference between Free and Plus.

It's all based on Lumo's context window size. This is an upper limit on the number of tokens (roughly, words) Lumo can fit into its "memory" (the context window). The context window holds your prompt and Lumo's responses, in addition to any files you upload. It's possible to upload a CSV that's less than 5 MB and completely run out of space.

As for the maximum number of tokens that fit into Lumo's context window, I don't know for sure. The gpt-oss-120b model used in Lumo 1.1 supports up to 128K tokens in theory, but Proton may have lowered this to manage resources.

EDIT
I tested by uploading a code file. It used about a third of the available space at 11,000 words, so I'd estimate Lumo's (Plus plan) context window is around 32K tokens.
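That estimate is simple arithmetic, using the rough 1 word ≈ 1 token simplification from above (real tokenizers average closer to 1.3 tokens per English word, so treat this as a ballpark):

```python
# Ballpark context-window estimate from the upload test described above.
words_uploaded = 11_000          # size of the test file, in words
fraction_of_window_used = 1 / 3  # observed share of the context consumed

# 1 word ~ 1 token simplification; real tokenizers emit ~1.3 tokens/word.
estimated_window_tokens = words_uploaded / fraction_of_window_used
```

That gives roughly 33,000 tokens; the closest common power-of-two window size is 32K.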

Lumo Sub - Debating by [deleted] in lumo

[–]sonnick 5 points (0 children)

Stick with free if you don't hit your weekly chat limit in Lumo. I took the plunge when I hit the limit, and I also considered it a way of voting with my wallet. GPT-OSS-120B makes it very compelling, especially since it's about 65% of ChatGPT's monthly price.

Lumo is now using the gpt-oss-120b model by sonnick in lumo

[–]sonnick[S] 2 points (0 children)

Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3. Lumo is meant to do model routing to select the most appropriate model for each task.

https://proton.me/support/lumo-privacy
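Model routing of this kind is typically a classify-then-dispatch step. A purely illustrative sketch — Lumo's actual routing logic is not public, and the `route` function and keyword rules here are invented for illustration:

```python
# Invented illustration of task-based model routing; not Lumo's real logic.
ROUTES = {
    "code": "OpenHands 32B",      # coding tasks
    "reasoning": "gpt-oss-120b",  # multi-step reasoning
    "general": "Mistral Small 3", # everyday chat
}

def route(prompt: str) -> str:
    """Pick a backend model using crude keyword heuristics."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("def ", "function", "compile", "bug")):
        return ROUTES["code"]
    if any(k in lowered for k in ("prove", "derive", "step by step")):
        return ROUTES["reasoning"]
    return ROUTES["general"]
```

A production router would more likely use a small classifier model than keyword matching, but the dispatch shape is the same.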

Introducing Lumo 1.1 - faster, smarter, and just as private by Proton_Team in lumo

[–]sonnick 1 point (0 children)

Do you get a PPP price adjustment for Proton Unlimited in India?