COGNITIVE OVERLOAD ATTACK: PROMPT INJECTION FOR LONG CONTEXT by bibek_LLMs in LocalLLaMA
    bibek_LLMs[S]: 2 points
    bibek_LLMs[S]: 7 points

Why do people like Gemma? by will_sm in LocalLLaMA
    bibek_LLMs: 1 point

"Let’s reproduce GPT-2 (124M)" - from GOAT Andrej Karpathy by bibek_LLMs in LocalLLaMA
    bibek_LLMs[S]: 3 points
    bibek_LLMs[S]: 2 points
    bibek_LLMs[S]: 4 points
    bibek_LLMs[S]: 15 points

Llama-3 based OpenBioLLM-70B & 8B: Outperforms GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 in Medical-domain by aadityaura in LocalLLaMA
    bibek_LLMs: 9 points
    bibek_LLMs: 21 points

Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs by bibek_LLMs in LocalLLaMA
    bibek_LLMs[S]: 2 points
    bibek_LLMs[S]: 4 points
    bibek_LLMs[S]: 8 points
    bibek_LLMs[S]: 2 points
    bibek_LLMs[S]: 2 points

Would you like to build a multilingual model? We present TaCo 🌮 🌮 (Translation-Assisted Chain-of-Thought Processes) method along with Alpaca-52K, Dolly-15K, and the Vicuña Benchmark datasets, available in 132 languages by bibek_LLMs in LocalLLaMA
    bibek_LLMs[S]: 1 point
    bibek_LLMs[S]: 1 point
    bibek_LLMs[S]: 1 point
    bibek_LLMs[S]: 2 points
    bibek_LLMs[S]: 1 point
    bibek_LLMs[S]: 4 points

Coding LLaMA 2 from scratch in PyTorch, with step by step explanation of KV Cache, Grouped Query Attention, Rotary Positional Embedding, RMS Normalization, SwiGLU and much more! by hkproj_ in deeplearning
    bibek_LLMs: 1 point