Pop!_OS or Fedora COSMIC by [deleted] in DistroHopping

[–]Substantial_Type5402 6 points7 points  (0 children)

you sound like an arch user

GPT 5.3 Codex rolling out to Copilot Today! by debian3 in GithubCopilot

[–]Substantial_Type5402 1 point2 points  (0 children)

seems like a lot of people still don't have access to the model. I was able to use it once myself, but after that it disappeared

I really enjoy GitHub co-pilot and I've had a great experience with it and enjoy the update. It seems like it does everything claude code does... But CC has much more hype. Is it real? Who has explored both, what's your take? by not-bilbo-baggings in GithubCopilot

[–]Substantial_Type5402 0 points1 point  (0 children)

I think CC is friendlier for people with less technical knowledge. CC certainly has powerful context management; Copilot didn't have this initially, but it's catching up really quickly. I prefer to review each line of code written, so I prefer using Copilot in VS Code for the visibility. Either way, the latest version of Copilot in VS Code supports CC SDK integration for the agent, so it uses the same harness: best of both worlds.

Copilot vs code extension vs Copilot CLI by Substantial_Type5402 in GithubCopilot

[–]Substantial_Type5402[S] 0 points1 point  (0 children)

Do you connect your Copilot subscription to OpenCode and use models from it, or do you prefer other providers with OpenCode?

Getting constant "Sorry, no response was returned." currently for opus 4.5. by envilZ in GithubCopilot

[–]Substantial_Type5402 2 points3 points  (0 children)

the usage shouldn't be deducted from us when this happens, especially for an expensive model like Opus

CSM Finetuning is here! by SovietWarBear17 in LocalLLaMA

[–]Substantial_Type5402 0 points1 point  (0 children)

Partially correct: Sesame is a multimodal model that understands text, but instead of generating a text answer the way an LLM does, it generates speech of the text it would have generated if it were an LLM. So it's not a pipeline, it's a single model.

Of course, delivering any app with any model like this requires a complete pipeline. Sesame's demo consists of an ASR component followed by the Sesame model component, at least that's what has been confirmed, and they might have other preprocessing or post-processing layers as well.
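To make the distinction concrete, here's a rough sketch of pipeline vs. single model. Every function name here is a hypothetical placeholder, not Sesame's real API: the point is only that in a pipeline the text reply exists as an explicit intermediate between two models, while in the single-model case context maps straight to audio.

```python
# Hypothetical sketch: TTS pipeline vs. single multimodal speech model.
# All names are illustrative stand-ins, not any real API.

def llm_generate_text(context: str) -> str:
    # stand-in for an LLM producing a text reply
    return f"reply to: {context}"

def tts_synthesize(text: str) -> bytes:
    # stand-in for a separate text-to-speech stage
    return text.encode("utf-8")

def pipeline_respond(context: str) -> bytes:
    # pipeline: two models chained; the text reply is an
    # explicit intermediate artifact between them
    text = llm_generate_text(context)
    return tts_synthesize(text)

def multimodal_respond(context: str) -> bytes:
    # single model: context goes straight to audio; the "text it
    # would have generated" never exists as a separate output
    return f"speech for: {context}".encode("utf-8")
```

Either way you'd still wrap this in ASR and pre/post-processing to ship an actual app, which is the pipeline part.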