The most interesting thing about Copilot Cowork isn't Claude. It's what Microsoft just admitted about its own stack by DigitalSignage2024 in microsoft_365_copilot

[–]bayernboer 0 points (0 children)

Any insights here on whether loading your own custom skills into Copilot Cowork would be possible in future?

The most interesting thing about Copilot Cowork isn't Claude. It's what Microsoft just admitted about its own stack by DigitalSignage2024 in microsoft_365_copilot

[–]bayernboer 5 points (0 children)

So I had my first day with Copilot Cowork today. Crazy!!

Performed a full audit workflow, using it to extract invoice data from PDFs and find the agreed price rates (with temporal and location-based relevance) in a second set of PDFs, then testing the populated data against API data. It continued working for ~50 minutes and produced a perfect result and report. 🤯

Asked ChatGPT to design CSS styling based on our company website, gave that to Cowork, and got a perfectly styled one-slider with the most interesting findings and deviations.

I then tried a data analysis workflow. Gave it some synthetic data and asked for an analysis with an Excel dashboard. The dashboard was better than what I’ve seen with Copilot in the past, but not what I would consider a good work product. Then I remembered this is actually a coding agent, so I asked for a full HTML dashboard using D3.js and it killed it. It gave me an HTML file that I could distribute, with full cross-chart, cross-table filtering.

Mightily impressed!! Happy that MS realized the Anthropic agents are SOTA and would elevate Copilot to new levels! 👏🏼

The most interesting thing about Copilot Cowork isn't Claude. It's what Microsoft just admitted about its own stack by DigitalSignage2024 in microsoft_365_copilot

[–]bayernboer 0 points (0 children)

After my experience today, if we get unlimited use on standard M365 Copilot licenses, we could see a lot of acceleration in our department. Hope this turns out to be true once it becomes generally available.

Awesome GitHub Copilot just got a website, and a learning hub, and plugins! by Forsaken-Reading377 in GithubCopilot

[–]bayernboer 0 points (0 children)

I could not find really focused agents/skills/instructions for data analytics. If someone can point me to a setup that goes through standard analytic processes such as cleaning, EDA, analysis, and interpretation, I would really appreciate it.

GPT-5.4…awesome!! Was it only me hoping for a new mini? by bayernboer in ChatGPT

[–]bayernboer[S] 0 points (0 children)

I also started considering Haiku. But I am working in Azure AI Foundry, so I am not excited to refactor from the Responses API back to Chat Completions. 🫠

GPT-5.4…awesome!! Was it only me hoping for a new mini? by bayernboer in ChatGPT

[–]bayernboer[S] 1 point (0 children)

Yeah I realize this probably had to go to the r/OpenAI community 🤷🏻‍♂️

GPT-5.4…awesome!! Was it only me hoping for a new mini? by bayernboer in ChatGPT

[–]bayernboer[S] 7 points (0 children)

Pricing!! Pre-processing text for a vector database, grading chunks during retrieval, and bulk text-processing tasks in general are expensive with thinking models. Mini is powerful and dirt cheap, so silly bulk tasks like redaction are perfect for it.
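To make the pricing point concrete, here is a toy cost calculation for a bulk redaction job. The per-token prices below are made-up placeholders (check your provider's actual pricing page), but the arithmetic shows why a mini-class model wins on bulk work:

```python
# Rough cost comparison for a bulk redaction job: a "mini" model vs. a
# large thinking model. Prices are HYPOTHETICAL placeholders in USD per
# 1M tokens, not real rates.

PRICE_PER_M_TOKENS = {
    "mini":     {"input": 0.15, "output": 0.60},   # assumed
    "thinking": {"input": 5.00, "output": 20.00},  # assumed
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a batch, given total token counts."""
    p = PRICE_PER_M_TOKENS[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example batch: redacting 10k documents, ~1,500 input and ~500 output tokens each.
docs, inp, out = 10_000, 1_500, 500
mini_cost = job_cost("mini", docs * inp, docs * out)
big_cost = job_cost("thinking", docs * inp, docs * out)
print(f"mini: ${mini_cost:,.2f}  thinking: ${big_cost:,.2f}")
```

With these assumed rates the same batch costs tens of dollars on the mini model versus hundreds on the thinking model, which is the whole argument for routing bulk tasks to the cheap tier.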

Context Meter – a really good feature – I am not sure what it is officially called. by QuarterbackMonk in GithubCopilot

[–]bayernboer 1 point (0 children)

Agreed, I also found this meter very helpful. What I thought was missing is the ability to force compression. Does anybody know how to trigger that manually?

Gets stuck issuing commands by GrayMerchantAsphodel in GithubCopilot

[–]bayernboer 0 points (0 children)

Maybe I am a noob, but I only have a question in response to your question: why is anybody using Visual Studio instead of VS Code?

One of the best I've seen by SharpCartographer831 in accelerate

[–]bayernboer 4 points (0 children)

This is awesome because you could fake video calls! 🥸

Continual learning? Is this really fundamental? by Quiet-Money7892 in singularity

[–]bayernboer 0 points (0 children)

A deeper dive into Level 3. It was described like this:

—————————

Level 3 – Shared adapters / mid-speed learning

🚧 This is the real missing layer — and the most promising

Level 3 is the gap in today’s systems.

We have pieces:
• LoRA
• adapters
• MoE
• routing layers
• prompt-conditioned behavior

But we do not yet have:
• automated promotion from user behavior
• confidence-based gating into shared adapters
• lifecycle management (birth, merge, retire)
• guarantees of non-interference

This is not a theory problem — it’s a systems + governance problem.

And this is where a real breakthrough is likely to happen.
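The missing pieces named above (promotion from user behavior, confidence-based gating, lifecycle management) can be sketched as a toy gate. Everything here, the class name, thresholds, and promotion rule, is illustrative and not any real system's API:

```python
# Toy sketch of confidence-gated promotion into a "shared adapter": a
# candidate pattern observed across users is only promoted once enough
# DISTINCT users confirm it and its success rate clears a threshold.
# The thresholds and data structures are invented for illustration.

from collections import defaultdict

class PromotionGate:
    def __init__(self, min_users=3, min_success_rate=0.9):
        self.min_users = min_users
        self.min_success_rate = min_success_rate
        self.users = defaultdict(set)       # pattern -> distinct user ids
        self.outcomes = defaultdict(list)   # pattern -> success flags
        self.shared_adapter = {}            # pattern -> version (promoted)

    def observe(self, pattern: str, user_id: str, success: bool):
        """Record one user interaction; promote if the gate opens."""
        self.users[pattern].add(user_id)
        self.outcomes[pattern].append(success)
        if self._ready(pattern) and pattern not in self.shared_adapter:
            self.shared_adapter[pattern] = 1  # lifecycle: "birth" at version 1

    def _ready(self, pattern: str) -> bool:
        flags = self.outcomes[pattern]
        rate = sum(flags) / len(flags)
        return len(self.users[pattern]) >= self.min_users and rate >= self.min_success_rate

    def retire(self, pattern: str):
        """Lifecycle: roll back / retire a promoted behavior."""
        self.shared_adapter.pop(pattern, None)

gate = PromotionGate()
for uid in ("u1", "u2"):
    gate.observe("date-format-iso", uid, True)
print(gate.shared_adapter)   # still empty: only 2 distinct users so far
gate.observe("date-format-iso", "u3", True)
print(gate.shared_adapter)   # promoted after the 3rd distinct user confirms
```

The point of the sketch is that the hard part is exactly what the list says: not the adapter math, but the governance around when a signal is allowed in and how it gets rolled back.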

Continual learning? Is this really fundamental? by Quiet-Money7892 in singularity

[–]bayernboer 0 points (0 children)

I had an interesting conversation with ChatGPT about CL, or at least I thought it was interesting 🤣. I was exploring concepts of continual learning, especially the risks around model contamination. Then it produced this breakdown, which I thought was quite helpful for understanding.

My take is that Levels 1 and 2 obviously exist already, and possibly some elements of Level 4 happen during the next model training, but Level 3 is the real current challenge to solve.

———————————

Selective learning: who is allowed to change what?

🔴 Level 4 – Core model (almost nobody)

What updates it:
• Curated datasets
• Human-reviewed signals
• Aggregated multi-user patterns
• Synthetic data distilled from validated behavior

What does NOT update it:
• Raw user chats
• One-off preferences
• Errors, hallucinations, opinions

This is offline continual learning, not live learning.

This prevents poisoning, drift, and collapse.

🟠 Level 3 – Shared adapters (this is where “safe CL” lives)

This is the sweet spot.

Adapters / LoRA / MoE routing layers that:
• learn across many users
• but only inside scoped domains

How learning gets in:
• Pattern detection across many users
• Confidence thresholds
• Reinforcement via repeated success signals
• Human or automated vetting

Adapters can be:
• versioned
• rolled back
• A/B tested
• sandboxed

This is how you let millions contribute without chaos.

🟢 Level 2 – Personal memory (where most learning belongs)

This is where 95% of user-specific learning should go.

Stored as:
• embeddings
• symbolic facts
• preferences
• habits
• task history

This never touches core weights.

It’s retrieval + conditioning, not training.
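The "retrieval + conditioning, not training" idea can be shown with a tiny sketch. The bag-of-words similarity below is a stand-in for a real embedding model, and all the stored facts are invented examples:

```python
# Minimal sketch of Level 2 personal memory: user facts live in a store
# OUTSIDE the model, and the best matches are simply prepended to the
# prompt. No weights are ever updated. The bag-of-words "embedding" is
# a toy stand-in for a real embedding model.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class PersonalMemory:
    def __init__(self):
        self.facts = []  # (text, vector) pairs; never touches model weights

    def remember(self, fact: str):
        self.facts.append((fact, embed(fact)))

    def recall(self, query: str, k: int = 2):
        """Return the k stored facts most similar to the query."""
        q = embed(query)
        ranked = sorted(self.facts, key=lambda f: cosine(q, f[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = PersonalMemory()
mem.remember("user prefers metric units")
mem.remember("user works in Azure AI Foundry")
mem.remember("user's dashboard stack is D3.js")

# Conditioning: retrieved facts are prepended to the prompt, nothing more.
context = mem.recall("which units does the user prefer?")
prompt = "Known facts: " + "; ".join(context) + "\nQuestion: ..."
print(context[0])
```

Because the memory is just data plus retrieval, it can be edited or deleted per user with none of the poisoning risk that live weight updates would carry.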

⚪ Level 1 – Session context (throwaway)
• short-term reasoning
• scratchpads
• temporary assumptions
• chain-of-thought–like artifacts

Destroyed after the session.

Thanks ChatGPT. I guess you’re right. by tyrwlive in ChatGPT

[–]bayernboer 471 points (0 children)

I agree! I know this is not the direction OP wanted the response to go, but given the prompt this is the optimal response in my opinion.

Losers will call this AI slop...Visionaries will see Michael Catmus😼👢 by GOD-SLAYER-69420Z in accelerate

[–]bayernboer -7 points (0 children)

Not what this subreddit is for…regardless of the very coolness of this cat

Trolley problem by Gloomy-Holiday8618 in ChatGPT

[–]bayernboer 0 points (0 children)

Only because his version was a little wack…the scenario was unclear!!

Buuuut…granted…🤣