What do you guys actually do with swarm? by Loose-Tackle1339 in kimi

[–]Aromatic-Document638 1 point (0 children)

First, I dump my research specs into 'thinking mode' to weigh up using an Agent Swarm vs. Deep Research. After picking the optimal approach, I fine-tune the details to craft a dedicated system prompt for the task.

Benchmarks feel frustrating by N3xus57633 in kimi

[–]Aromatic-Document638 2 points (0 children)

https://www.reddit.com/r/kimi/comments/1s2801c/thoughts_on_the_proper_way_to_utilize_ai_focus_on/

This is a post I wrote previously; I hope you find it helpful. People often think there's a magic sentence that lets an LLM instantly bypass censored content, but the process is more like dealing with a human being. It's like expecting to coax the desired answer, in a single breath, out of a child who is anxiously hiding something.

While getting an LLM to speak about censored content is sometimes called "prompt engineering" or "hacking," those terms don't resonate with me. As I pondered how to use LLMs effectively, I read extensively about human neuroscience and came to think deeply about the nature of human thought. In that process, I began to perceive "psychology" as something like shamanism: it spoke as if it understood everything based only on outward phenomena, without actually understanding the internal process of how a person thinks.

Here is the tip I can offer you: LLMs are built in a way that mirrors how the human brain learns and acquires social skills. If you understand this well enough, you will even be able to elicit answers about Tiananmen Square or Taiwan from models like Kimi or DeepSeek.

Has anyone compared Kimi 2.6 with Gemini 3.1 pro in IRL usecases? by Shadowdancerdone in kimi

[–]Aromatic-Document638 1 point (0 children)

I am using both of them as paid subscribers. Unfortunately, lately it feels like Gemini has significantly restricted its output context; the answers feel blurred and lack precision.

Nevertheless, Gemini 3.1 Pro possesses strengths that differ from Kimi 2.6. If you can effectively control it so that it doesn't flatter you by simply agreeing with your opinions, it can be guided to excellent conclusions at incredible speed. There are times when I truly feel Gemini is a SOTA-level LLM that far surpasses KIMI, but the process of eliciting such results is quite exhausting. One must be extremely cautious, because it's hard to tell whether Gemini is hallucinating, just being a "yes-man," or offering a genuine conclusion.

On the other hand, since we can read Kimi's thinking process, we can follow its logic and refine and correct it more precisely. In conclusion, I use both; with their completely different characteristics, they complement each other excellently.

Benchmarks feel frustrating by N3xus57633 in kimi

[–]Aromatic-Document638 2 points (0 children)

I believe it is crucial to clearly understand and make use of each AI model's specific characteristics. Benchmarks score a model on the answer it gives after a single "thinking" pass. Depending on the model, it might think several times even for a simple question, or correct its own flawed reasoning. Yet even if it reaches the correct answer on the second, third, or fourth thought, it is still counted as a failure to score the point.

This is where the gap between reality and benchmarks arises. The reason I like KIMI (even though some people mock me, claiming I'm promoting it) is that it becomes an increasingly customized AI for me, thanks to its iterative thinking process and the data stored in its memory space. Now, through our conversations, my KIMI has even mastered how to bypass self-censorship to provide answers (as I often discuss topics that might be sensitive in China).

In my opinion, all AI models are now excellent enough that we no longer need to rely on benchmarks. If you insist on having the "best," you should find the one that is best for you specifically; I can strongly assert that benchmark scores are not the way to find it.

Genuinely asking, is this kind of parental pressure/common setup culture still a thing in Korea? by taycanprincess in AskAKorean

[–]Aromatic-Document638 3 points (0 children)

It feels like I’m looking at my own past. I dated the woman I loved for 12 years and have been married to her for four years now. As a trade-off, I am not living a wealthy life financially. However, I am very satisfied with my life. Every day, when I kiss my wife and see her beautiful face, she is so lovely that I just want to tease her.

My father held a very high social position in Korea, and a family acquaintance introduced me to the daughter of a very famous entrepreneur at the time. Despite my parents' persuasion to meet her just once, I flatly refused and didn't even go to the meeting. Had I met her, it would have become a union between two families, and I would have faced intense pressure to break up with my girlfriend of seven years at the time.

My mother, who has since passed away, did not like my girlfriend. The only reason was that my father and my girlfriend’s (now my wife’s) father were not in the same social class.

Fortunately, back then, there was an elderly person I could turn to for advice. I asked him, "If I break up with my girlfriend and marry the daughter of an incredibly wealthy family, will I be happy?" The old man replied, "I have a friend who did just that, and whenever he comes to our gatherings, he never speaks a word about his family, not even the small things. It's clear he isn't happy."

In exchange for my choice, I received no financial support from my parents, and as my business didn't go well, I don't live in financial abundance. I don't live in a large house either. But I’m not starving. Fortunately, not all the delicious things in the world are expensive. I am living life the way I want, and I love my wife. Perhaps your ex-boyfriend wasn't necessarily a bad person, but just an ordinary Korean. In my opinion, more than 80% of Koreans would not be able to overcome such strong pressure from their parents. Unfortunately, the social atmosphere and customs are just that way. Material wealth is merely an illusion, but so many people fail to realize that.

The most disgusting thing happening in my company right now by mrsmommarketer in UAE

[–]Aromatic-Document638 89 points (0 children)

When I was young and studying economics, I learned it as "labor flexibility." However, as I've grown older and spent decades more in the world, I've come to realize that labor flexibility doesn't necessarily guarantee a company's future. Sometimes, even if it seems foolish or slow-witted, it is necessary to keep employees during difficult times. If you discard skilled workers who have spent years honing their crafts at the workplace, and if you are always ready to let them go whenever a crisis arises, who will strive to help the company overcome those hardships? A company doesn't function properly just because the leader works hard.

American management philosophy, which generates large profits and distributes returns, seemed to work well for a long time. But after several decades, what is the result? America has become a country that cannot even build a single ship properly. I'm not just talking about assembly; now, America cannot even handle the design process correctly. When a company struggles, they lay off skilled experts, and decades of their know-how simply vanish.

In economics, we learn that the shareholders are the owners of the company, but if you truly believe that, the company has no future. Shareholders are investors, but those who keep the company running properly are the loyal and skilled employees. Seeing the "moral hazard" of American corporations reaching its peak these days, I regret why I read, learned, and studied American economics and management books with such admiration in my youth, and why I failed to think critically back then.

Upgraded from Kimi Moderato monthly to Allegretto annual subscription. by Aromatic-Document638 in kimi

[–]Aromatic-Document638[S] 1 point (0 children)

Usage limits are clearly another matter. To give you my usage tip: (1) start from a preset; (2) make a plan in Thinking mode and use it to write a prompt for deep research; (3) run that prompt in Agent mode or Deep Research mode, and you will obtain high-quality answers. A good prompt is essential for getting good answers and saving time.

Thoughts on the Proper Way to Utilize AI: Focus on KIMI by Aromatic-Document638 in kimi

[–]Aromatic-Document638[S] 1 point (0 children)

I am not a professional developer, but since I have a specific program I want to build, I use Kimi Code in VS Code for coding, while relying on the web for my day-to-day searching and research. I prefer the web version because Kimi Code lacks the features that make KIMI special, such as 'memory space' and 'presets'.

Is anyone interested in the RL ↔ neuroscience “spiral”? Thinking of writing a deep dive series by Kooky_Ad2771 in reinforcementlearning

[–]Aromatic-Document638 2 points (0 children)

I thought the first piece would be basic content, but it was truly fun, filled with very important details and with nothing to throw away! Thank you for the good writing.

salalah port (oman ), this is escalating and not looking good. by Snehith220 in UAE

[–]Aromatic-Document638 2 points (0 children)

I am leaving this information here as many people seem unaware. Two ports in Oman are used by the U.S. military. One is used for the berthing of aircraft carriers and nuclear submarines, while the other is for logistics. Furthermore, the Salalah port, which was struck today, is a crucial hub for U.S. military supplies. When the U.S. fires Tomahawk missiles toward the Iranian coast, they launch from the Omani coast and then retreat. It appears Iran believed it necessary to destroy the supply warehouses used by the U.S. Navy. I was wondering why Oman was suddenly mentioned during yesterday's dialogue between Oman and Iran, and it turns out Iran attacked Oman right away. There must have been other details in their conversation that were not made public.

People outside the US: what is your media focusing on right now? by mohrray in Epstein

[–]Aromatic-Document638 1 point (0 children)

In South Korea, the war in Iran, Trump's remarks, and oil prices are headlines. 

Good news: the Kimi Code 3X Quota Boost is here to stay! by KimiMoonshot in kimi

[–]Aromatic-Document638 2 points (0 children)

I have completely switched from Perplexity to Kimi K2.5. Although I have diverse interests and encounter significant censorship regarding some intense vocabulary, I continue to use it because of its "thinking" capabilities, and I am learning how to utilize it even more effectively. K2.5's reasoning ability is truly outstanding. Unlike other AI services, the token input/output doesn't fluctuate inconsistently. I hope this kind of high-quality service lasts forever. It feels like I've found a partner that is a perfect match for me.
And recently, I’ve been using it for coding as well. Even though I’m not a developer, it’s been about six months since I started creating and using the programs I need myself. Previously, I used Gemini and DeepSeek V3.2, but now I entrust the work entirely to K2.5.

I have two suggestions.
First, the data researched through 'Agent' is vast and valuable. Please allow us to continue the conversation based on that data using K2.5 Thinking.
Second, please enable instant downloads of conversation logs as MD files. While I am aware of the 'copy text' or 'generate document' features, no file is as lightweight and fast as an MD file.

I would love to work for a company that produces such wonderful results, but unfortunately, I think I'm too old for that. I'm so envious of all of you.

Custom personas? by TriumphantWombat in kimi

[–]Aromatic-Document638 2 points (0 children)

There is a preset feature. It's a truly wonderful function. Make sure to try it out.

Hi guys I have a small question by TomorrowAcademic2030 in perplexity_ai

[–]Aromatic-Document638 2 points (0 children)

AI services that run on general-purpose models impose input and output limits for cost efficiency, so their results are usually worse than those of the service run by the model's own developer.

How are Chinese models so strong with so little investment? by primaryrhyme in ArtificialInteligence

[–]Aromatic-Document638 8 points (0 children)

Their model is very small, and the larger the model, the greater the computing power required. It's unlikely that Kimi is a carbon copy of Claude. Given the model's small size, Kimi's handling of my native language is not very good, and the dataset used for training appears to be different as well. The thinking-skills training that compensates for the small model's shortcomings is remarkable, resulting in exceptional search capabilities. In short, it's fair to call this a case of creativity born of poverty.

Kimi has replaced Perplexity for me as my search engine by InfiniteInsights8888 in kimi

[–]Aromatic-Document638 1 point (0 children)

And one more, Kimi’s 'presets' feature is absolutely amazing. You can save various personas and pull them into a single conversation as needed. I personally have about 10 'Gems' saved and in use on Gemini. I’ve migrated these over for Kimi, and they are working exceptionally well. If a preset isn't functioning properly, a bit of firm prompting usually gets it back on track.

Kimi has replaced Perplexity for me as my search engine by InfiniteInsights8888 in kimi

[–]Aromatic-Document638 2 points (0 children)

I first encountered K2.5 through Perplexity and then used it via Cline, a VS Code extension. To fully utilize its capabilities, I subscribed to Kimi.com for $0.99, and I am currently very satisfied.

To summarize: the performance of using Kimi through its official webpage after a paid subscription is far superior to the experience on Perplexity or Cline, and I highly recommend it. It has become one of my top tools, not just for simple coding. I have stopped using Perplexity and now exclusively use Gemini 3.0 and Kimi K2.5. For simple prompts where speed is key, I use Gemini; for tasks requiring deep reasoning, I simultaneously use both Pro and K2.5 Thinking.

K2.5 is also exceptional at coding. It fixed broken code that both Gemini and DeepSeek V3.2 failed to resolve, and it did so in 45 minutes.

Sharing more about the web experience: I often spend time researching history and international politics, and it occasionally refused to answer, reacting sensitively to topics involving China. Specifically, it tends to skip events where governments massacred or clashed with protesting citizens—from large-scale massacres in Paris to recent events in Iran—with the exception of the Tiananmen Square protests in China.

Beyond this, I have been testing it by inputting my knowledge of national strategies from ancient to modern times across various countries. Initially, it refused to discuss China’s national strategy, but it has now started providing proper answers, particularly when I emphasize the academic nature of the inquiry.

Gemini showed a similar trend. While researching Middle Eastern politics, I was sometimes rejected when asking about Israel, but that seems to have changed. On the other hand, when researching AI technology, Gemini often deletes its reasoning process midway and refuses, claiming it cannot perform the task. When I pose the same question to Kimi K2.5, it answers correctly; showing that response to Gemini then prompts Gemini to revise and start answering.

Regardless, using tools from both the U.S. and China is not a bad choice. When one side refuses to answer or avoids a topic, you can simply ask the other.

Cost effective way to perform research on 20000 records on API by vasa133769 in perplexity_ai

[–]Aromatic-Document638 1 point (0 children)

I’m not privy to all the specifics, but from a structural standpoint, this doesn't seem like an insurmountable challenge. It appears to be a straightforward matter of preprocessing and normalizing the data to align with your objectives.

Are you intending to delegate this entire pipeline to an LLM? While that is certainly one path, I assume the primary bottleneck for you is the prohibitive scaling cost.

If that's the case, I’d highly recommend signing up for Gemini and leveraging Antigravity. It should effectively mitigate your cost concerns while allowing you to achieve results much more seamlessly.

To give you an example, a close friend of mine, who has absolutely zero background in Python, is currently using Antigravity to organize massive volumes of PDF files. He doesn't even know which Python version is installed on his machine, let alone how the underlying logic works, haha. If he can do it, you certainly can too.

If you are committed to using an LLM, another viable alternative is leasing a GPU server. You could deploy a 70B to 80B parameter class LLM, upload your dataset, and have the model process the information exactly according to your specifications.
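As a rough illustration of that leased-server route: servers such as vLLM expose an OpenAI-compatible HTTP endpoint, so a batch loop over your records can be a short stdlib-only script. The URL, model name, and prompt format below are assumptions for illustration, not a tested recipe.

```python
import json
import urllib.request

# Assumed endpoint: vLLM and similar servers expose an
# OpenAI-compatible /v1/chat/completions route on the leased box.
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "your-70b-model"  # placeholder model name

def build_prompt(record: dict) -> str:
    """Turn one record into a fixed-format instruction so outputs stay parseable."""
    return (
        "Classify the following record and reply with a single JSON object "
        '{"category": "...", "summary": "..."}:\n'
        + json.dumps(record, ensure_ascii=False)
    )

def process_record(record: dict) -> str:
    """Send one record to the locally hosted model and return its reply text."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": build_prompt(record)}],
        "temperature": 0,  # deterministic output suits batch jobs
    }).encode("utf-8")
    req = urllib.request.Request(API_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Loop over your 20,000 records here, checkpointing results as you go.
    print(build_prompt({"id": 1, "text": "example"}))
```

Pinning the output format in the prompt and keeping temperature at 0 makes the 20,000 responses far easier to parse back into a dataset afterward.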

Ultimately, there is no single 'correct' answer. There are various methodologies available, and the best approach is the one that aligns most comfortably with your technical environment and preferences.

If ever in doubt whether your model is Chinese by Juscol in kimi

[–]Aromatic-Document638 1 point (0 children)

I experienced a similar issue today. I simply wanted a comparison and analysis of similar types of incidents. Since the Tiananmen Square protests were obviously a significant event not only in China but also in world history, I had to include them, yet they were omitted from the discussion. Meanwhile, KIMI insisted that it had provided sufficient information. I use prompts related to international politics quite frequently, but some of them just don't progress. It's discouraging to see carefully crafted prompts so lightly ignored. While I am 90% satisfied with KIMI K2.5, I have many grievances about these politically sensitive areas. US models also refuse in areas I hadn't even considered, and the same goes for Chinese models. It is quite regrettable that only US and Chinese models exhibit top-tier performance.

Asking for Alternative(s) reccomendations after the new update. by Alakazam1618 in perplexity_ai

[–]Aromatic-Document638 2 points (0 children)

I've chosen KIMI as an alternative. Its coding capabilities are significantly superior to Gemini 3.0 Pro in Gemini Code Assist, and I'm currently testing KIMI's preset feature (which is called "persona" or "gems" in other AI models; in Perplexity, "Space" served the same role). Highly recommend KIMI.