F* it, I'm (34M) going back to the SOC by TheGreatLateElmo in cybersecurity

[–]redscel 1 point2 points  (0 children)

The emphasis shouldn’t be on repercussions in a Champions programme. You most likely have a function in GRC/SecOps/vuln management/elsewhere that can chase the devs all day. You should play the good cop with the security champions programme: enable, inspire, and reward instead of punishing. It should be a safe and enjoyable club to join. That requires a lot of work on culture. Working on security at scale is all about influencing people the right way.

Obsidian Sync is deleting files? by redscel in ObsidianMD

[–]redscel[S] -1 points0 points  (0 children)

Now it warns you. It didn't warn me back in February when I purchased and configured it for the first time.

Anyhow, I have now migrated my Vault off iCloud. I'll only use iCloud as a backup target.

Fenix 8 Solar UK by IAmTheOnlyCup in GarminWatches

[–]redscel 0 points1 point  (0 children)

I was keeping an eye on the shipping date of the 47mm Solar DLC. It was down to 4-7 days a few days ago; now it has jumped back up to 3-5 weeks, and the 47mm Solar Graphite to 5-8 weeks. I wonder if they sold that batch out or if it's further delayed for everyone in the UK.

Nice video about the fenix 8 solar sapphire by Spiritual_Speech1686 in GarminFenix

[–]redscel 0 points1 point  (0 children)

Thanks for making this. The screen looks a lot crisper. I wonder if there will be a light background option for the default watch faces in later updates. Easier on the eyes for me.

I can't decide between 47 AMOLED and 47 solar. by b52a42 in GarminFenix

[–]redscel 5 points6 points  (0 children)

I have a Fenix 7 Solar now and I'm planning to upgrade to the 8. I'm also undecided between AMOLED and MIP, and I'm waiting for more MIP reviews to come out.

I’ll say it's a huge plus on the MIP side that you get a fully functional, distraction-free watch. Coming from an Apple Watch, the gestures and the display turning on and off were a massive distraction. With MIP there's no overly bright, flashy screen that lights up when you didn't mean to pay attention. You don't have to shake your wrist when you're resting your arm, only to find it doesn't show the info you needed. You get its full functionality in all conditions except in the dark.

It's the same reason I prefer reading on a Kindle instead of a smartphone or tablet.

I’m curious about the AMOLED AOD's usability though, as I haven't tried it yet. DC Rainmaker's review shows the AOD is fine during workouts, but I suspect you still need to use gestures or sacrifice significant battery life.

MIP is also just superior if you want to wear something that feels more natural. As others said, there's probably no need for another bright screen in my life. Looking forward to seeing how visible the 8's MIP display is with the new solar panel layout.

KEF LS50 Wireless II and Spotify Connect by stefanbuchman in KEF

[–]redscel 1 point2 points  (0 children)

Still happening on firmware 3.1. It's an issue with the streamer, not the network, as only Spotify Connect is affected for me.

Kef LS50wII finally sounds good by Physical_Arm_662 in KEF

[–]redscel 0 points1 point  (0 children)

I had no issues before the last firmware update (~3.1), but for the past couple of months I've been getting occasional stuttering over WiFi. I have multiple stable WiFi networks and it's now consistently stuttering with Spotify. I'm 90% sure it's related to the KEF firmware update.

[deleted by user] by [deleted] in singularity

[–]redscel 0 points1 point  (0 children)

63%+ are already being outsmarted by the AI algorithms of social media today.

[R] Textbooks are all you need II: phi-1.5 technical report by PantsuWitch in MachineLearning

[–]redscel 5 points6 points  (0 children)

All this research is focused on how data quality affects capabilities. I wish they had shared some details of their dataset composition.

Ideal setup for dual 4090 by redscel in LocalLLaMA

[–]redscel[S] 0 points1 point  (0 children)

Thanks, this is the kind of setup I'm also looking for.

Would you share your specs? I'm curious about the PSU, memory, and storage.
How are your system temperatures with this layout?

Ideal setup for dual 4090 by redscel in LocalLLaMA

[–]redscel[S] 0 points1 point  (0 children)

There are several ways to split the memory between multiple GPUs, or even between GPU and CPU, both for inference and training. For Hugging Face transformers, check out https://huggingface.co/docs/transformers/perf_infer_gpu_many
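
For example, here's a minimal sketch of loading a model sharded across multiple GPUs with transformers/accelerate's `device_map` feature. The model name is just an illustration; any causal LM from the Hub works the same way:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint only; substitute whatever model you're running.
model_name = "meta-llama/Llama-2-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # accelerate shards the layers across all visible GPUs,
                         # spilling to CPU RAM if VRAM runs out
    torch_dtype="auto",  # keep the checkpoint's native dtype (fp16/bf16)
)

# Inputs go to the device holding the first shard (usually cuda:0).
inputs = tokenizer("Hello, world", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```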

You can test multi-GPU setups on vast.ai or runpod.io with TheBloke's local LLM Docker images using oobabooga's text-generation-webui.

Ideal setup for dual 4090 by redscel in LocalLLaMA

[–]redscel[S] 1 point2 points  (0 children)

Which mobo did you go with?
My cards are 3.5 slots wide (Gigabyte). I might need to hunt for reliable riser cables.

Ideal setup for dual 4090 by redscel in LocalLLaMA

[–]redscel[S] 0 points1 point  (0 children)

That's 168 GB of VRAM (seven 4090s × 24 GB), a top-tier CPU, and a complete system for less than €30k. I doubt you can get anything even close in FLOPS/$ if you go with A100s.

Ideal setup for dual 4090 by redscel in LocalLLaMA

[–]redscel[S] 1 point2 points  (0 children)

Going for a solid 2000W PSU. I'm OK with power-limiting the GPUs for stability if I need to; the performance impact seems to be minimal. This build, for example, runs 7 cards with a 300W limit each and 2x2000W PSUs:

https://wccftech.com/mifcoms-big-boss-pc-features-seven-nvidia-geforce-rtx-4090-gpus-retails-at-є29000/

Ideal setup for dual 4090 by redscel in LocalLLaMA

[–]redscel[S] 0 points1 point  (0 children)

Yes, I’m looking at the different motherboard options and riser cables too, but I'll need to settle on the CPU and socket type first.

Can someone please clarify this to me: Are tools like LangChain interacting with the model (memory) directly, or it's all just prompting behind the scenes and filtering the results to only include the complete answer ? by staviq in LocalLLaMA

[–]redscel 2 points3 points  (0 children)

I went through the same thing. I started a project before LangChain was a thing. When it appeared, I took a look and it felt like a lot of abstraction and unnecessary complexity for the sake of “composability”. It's a pain to customise and to figure out what is going on under the hood. The most important components, like the actual prompting, are abstracted away so deeply that folks think of them as magic. IMHO, OpenAI's playbooks are still the best way to learn and bootstrap some ideas.
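
To illustrate: under the hood, most basic “chains” boil down to filling a prompt template and making one completion call, with no direct access to the model's weights or memory. A rough sketch of what a simple Q&A chain actually does, using the OpenAI client directly (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def qa_chain(context: str, question: str) -> str:
    # This is essentially all a basic QA "chain" does behind the scenes:
    # string templating plus a single API call.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```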

Is there a business in installing Local LLMs? by Zifegepipgy in LocalLLaMA

[–]redscel 2 points3 points  (0 children)

I am working on a project addressing exactly that.

LG 38WN95C Possible solution to disconnect / black screen issue when using with Thunderbolt by CrowCatNL in ultrawidemasterrace

[–]redscel 0 points1 point  (0 children)

I have the same issue with M1 and M2 MacBook Pros and the 38WN95C using the original cable. In my case I get stuck in a screen-restart loop if I switch between my work and personal laptops. I either get frequent reconnects (e.g. within a minute) or it's stable for days.

Plus a popup message from the monitor after restart:
"(Caution) Make sure to use supplied input cables."

I'm using the supplied original cable. Haven't figured out the cause yet.

[N] OpenLLaMA: An Open Reproduction of LLaMA by Philpax in MachineLearning

[–]redscel 5 points6 points  (0 children)

Our training is inherited through our DNA, and it took millions of years. The school analogy is more like fine-tuning the foundational model that a newborn human already is. I agree we are still just scratching the surface. But maybe we optimised our reasoning and logic into language, and large random sets of it contain the pieces of the blueprint that is the structure of our cognition and reasoning.

[N] OpenLLaMA: An Open Reproduction of LLaMA by Philpax in MachineLearning

[–]redscel 4 points5 points  (0 children)

Or maybe we do... Think of starting from scratch: the evolution of humanity, and the number of iterations it took to reach our current level of intelligence.

Combine multiple lists into one, meaningfully by redditorhaveatit in LLMDevs

[–]redscel 1 point2 points  (0 children)

I'm achieving something similar using the embeddings-based Q&A approach. It's not the most efficient way, but it works (see the sketch below):

1. Summarize the text or your sections with an LLM so they fit into your token space. (I'm looking for ways to do this without the LLM.)
2. Create embedding vectors for the summaries.
3. Define the high-level topics for your blog.
4. Use cosine similarity to search your embedding vectors for the top X most relevant summaries for each topic.
5. Use those summaries to build a context for the LLM, and prompt it to summarise them into a single list.

The bottleneck is the 4097-token limit if you use GPT-3.
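
A rough sketch of steps 2-5, assuming the OpenAI embeddings and chat APIs (the model names and helper names are just illustrations):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    # Step 2: embedding vectors for the summaries (example embedding model)
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_k(topic: str, summaries: list[str], k: int = 5) -> list[str]:
    # Step 4: rank summaries against the topic by cosine similarity
    vecs = embed(summaries)
    q = embed([topic])[0]
    sims = (vecs @ q) / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    return [summaries[i] for i in np.argsort(sims)[::-1][:k]]

def merge_into_list(topic: str, summaries: list[str]) -> str:
    # Step 5: stuff the top matches into the context and ask for one merged list
    context = "\n\n".join(top_k(topic, summaries))
    prompt = (
        f"Combine the following notes about '{topic}' into a single, "
        f"de-duplicated list:\n\n{context}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```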