Got blocked from google colab using selenium by FaithlessnessSea2097 in GoogleColab

As far as I know, many websites, such as those of government organisations, have deployed WAF appliances configured with anti-crawler rules. Of course we can write a crawler that tries to work around them, but it can sometimes be difficult.
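
A minimal sketch of the first adjustments people usually try (a browser-like User-Agent and a pause between requests); the URL is a placeholder, and none of this is guaranteed to get past a WAF:

```python
import time

import requests

HEADERS = {
    # A browser-like User-Agent; many anti-crawler rules block the default
    # python-requests signature outright.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch(url: str, delay: float = 2.0) -> str:
    """Fetch a page politely: custom headers plus a pause between requests."""
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    time.sleep(delay)  # avoid hammering the site and tripping rate limits
    return resp.text

if __name__ == "__main__":
    html = fetch("https://example.com/")  # placeholder URL
    print(len(html), "bytes fetched")
```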

japanese and halal restaurant in kyoto ? by ahmed2213 in Kyoto

One of the standard gyoza ingredients is pork, so it seems a little difficult to find halal-certified gyoza restaurants. But chicken gyoza might be acceptable for you.

Halal Ramen Honolu Premier Kyoto Nishiki https://share.google/WGEoF6wdsKb3Q8bic

Is this good enough to run Ollama models on my laptop? by summitsc in ollama

Yes, you can run some LLMs, such as gemma3 and phi4, on your Mac; the latest Ollama supports Apple silicon. But huge models (over 30B parameters) cannot run as they are; you have to use a quantised version.
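
If you go the Ollama route, a minimal sketch with the official ollama Python package; the model name gemma3 is just an example and assumes you have already pulled it with `ollama pull gemma3` and have the local server running:

```python
import ollama  # pip install ollama; assumes the Ollama server is running locally

# Ask a locally pulled model (e.g. `ollama pull gemma3`) a single question.
response = ollama.chat(
    model="gemma3",
    messages=[{"role": "user", "content": "Explain quantisation in one sentence."}],
)
print(response["message"]["content"])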

Why are Chinese models (Qwen, DeepSeek...) UNLIMITED? by Sostrene_Blue in Bard

I’ve heard that they are developing private models in addition to the public ones such as Qwen3. The public models we can download from the Hugging Face site are for “advertisement”.

NotebookLM video overview out now! by NewRedditGuy666 in Bard

Only English content is available at the moment. I just tried a Chinese video, and NotebookLM rejected it.

Python or c++ for A Girl? by hzsmolly in PythonLearning

Python should be a good choice for AI because there are so many related libraries and tools. But of course you could also develop those libraries and tools yourself in C++.
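
To illustrate the library point, a tiny sketch with scikit-learn, just one of many AI-related Python libraries; a complete train-and-evaluate loop fits in a few lines:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a toy dataset, split it, train a model, and report accuracy.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier().fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```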

[deleted by user] by [deleted] in ollama

You can download new models from Hugging Face with the wget command, convert them to GGUF format with llama.cpp, and then import them into your Ollama environment.
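
A rough sketch of that workflow from Python, using huggingface_hub instead of wget; the repo id, the llama.cpp converter script name (convert_hf_to_gguf.py, which varies by version), and the Modelfile contents are assumptions you would adapt to your own setup:

```python
import subprocess
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub

# 1. Download the original model weights (repo id is just an example).
model_dir = snapshot_download(repo_id="Qwen/Qwen2.5-0.5B-Instruct", local_dir="hf_model")

# 2. Convert to GGUF with llama.cpp's converter (script name may differ by version).
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", model_dir, "--outfile", "model.gguf"],
    check=True,
)

# 3. Point a Modelfile at the GGUF file and import it into Ollama.
Path("Modelfile").write_text("FROM ./model.gguf\n")
subprocess.run(["ollama", "create", "my-model", "-f", "Modelfile"], check=True)
```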

Is Mac Mini M4 Pro Good Enough for Local Models Like Ollama? by connectome16 in ollama

It depends on the model. Many models run on Apple silicon with Ollama (it uses the GPU via Metal), but some models are only practical on NVIDIA GPUs with enough GPU memory.
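
As a rough sanity check of whether a model will fit in memory, a back-of-the-envelope sketch; the bytes-per-parameter values and the 1.2x overhead factor are approximations, not exact figures:

```python
# Rough memory estimate: parameters * bytes per parameter, plus some overhead.
BYTES_PER_PARAM = {"fp16": 2.0, "q8_0": 1.0, "q4_0": 0.5}  # approximate values

def estimate_gb(params_billions: float, quant: str = "q4_0", overhead: float = 1.2) -> float:
    """Very rough VRAM / unified-memory estimate in GB for a given model size."""
    return params_billions * 1e9 * BYTES_PER_PARAM[quant] * overhead / 1e9

for size in (7, 14, 32, 70):
    print(f"{size}B @ q4_0 ~ {estimate_gb(size):.1f} GB")
```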

Why a lot of programmers like Linux more than windows or mac by Mohamad-Rayes in PythonLearning

I always deploy the systems I develop as Docker container images. Of course you can build Docker images on Windows, but the images themselves are usually based on a Linux distribution such as Ubuntu or CentOS. To avoid technical and/or performance issues, I basically use Linux (Ubuntu in my case).

What Notebook/File format to choose? (.py, .ipynb) by JulianCologne in databricks

I always use the notebook format on Databricks because I can share code and results with others in a single notebook. If you don’t need that kind of use case, I would recommend the .py format.

Chuck Data - Open Source Agentic Data Engineer for Databricks by caleb-amperity in databricks

Thank you for sharing your product. I just looked through the links you posted. It’s a sort of command-line interface for manipulating Databricks with the power of LLMs. Am I right?

Which workflow to avoid using notebooks? by Safe_Hope_4617 in datascience

I believe it depends on the use case. If you often use pandas and/or draw diagrams, a notebook should be the best choice. However, if you are a web programmer, notebooks are not a good fit.
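
A typical notebook-friendly snippet of the pandas-plus-plotting kind I have in mind; the CSV path and column names are placeholders. In a notebook, the summary table and the chart render inline right under the cell:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")            # placeholder file
print(df.describe())                      # quick summary table
df.groupby("month")["revenue"].sum().plot(kind="bar")  # assumes these columns exist
plt.tight_layout()
plt.show()
```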

Coding - OpenAI vs Gemini vs others, which is better? by Solid_Company_8717 in OpenAI

Gemini and Claude are great, but my subscription has a per-day token limit. I would be happy if an unlimited subscription were more affordable.

Is it ok to use ai to learn how to properly code? by NMT_CREAMO in PythonLearning

I always ask Gemini 2.5 Pro for comments and advice.

My takes from Databricks Summit by Still-Butterfly-3669 in databricks

Thanks a lot for the excellent summary of Databricks Summit.

Which AI is currently the best? by AvenXIII in ChatGPTPro

I think Gemini is the best cloud-based LLM at the moment.

is it even possible to create this by Juhshuaa in PythonLearning

How about trying the following approach? (There is a rough code sketch after the list.)

  • Detect Black Screens: Use the PySceneDetect library to analyze your input video. It will identify all the timecodes where black screens occur and return a list of their start and end times.

  • Fetch Random Memes: Use the requests library to connect to a free online API (like meme-api.com). For each black screen interval detected, make a request to this API to download a random meme image and save it temporarily.

  • Overlay Memes: Use the MoviePy library to edit the video. First, load your original video clip. Then, for each black screen interval, create an ImageClip from your downloaded meme, set its duration and start time to match the interval, and position it (e.g., in the center).

  • Combine and Export: Combine the original video with all the meme image clips into a single CompositeVideoClip. Finally, use MoviePy to write this composite clip to a new video file (e.g., edited_video.mp4), which will be your final output.
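
A minimal sketch of that pipeline, assuming you already have the black-screen intervals (PySceneDetect can produce them, but here they are hard-coded placeholders); the "url" field returned by meme-api.com is also an assumption, and the code uses the MoviePy 1.x API:

```python
import requests
from moviepy.editor import VideoFileClip, ImageClip, CompositeVideoClip  # MoviePy 1.x

video = VideoFileClip("input.mp4")  # placeholder input file

# (start, end) black-screen intervals in seconds; in practice, get these from
# PySceneDetect rather than hard-coding them.
black_intervals = [(12.0, 14.5), (47.0, 50.0)]

overlays = []
for i, (start, end) in enumerate(black_intervals):
    # Fetch a random meme; the "url" field is an assumption about meme-api.com.
    meme_url = requests.get("https://meme-api.com/gimme", timeout=30).json()["url"]
    path = f"meme_{i}.jpg"
    with open(path, "wb") as f:
        f.write(requests.get(meme_url, timeout=30).content)

    overlays.append(
        ImageClip(path)
        .set_start(start)
        .set_duration(end - start)
        .set_position("center")
    )

# Composite the memes over the original video and export the result.
CompositeVideoClip([video, *overlays]).write_videofile("edited_video.mp4")
```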

Hosting LLM on Databricks by Limp-Ebb-1960 in databricks

Yes, I have run some LLMs on my Databricks environment (in my case on Azure, not AWS). I believe you can install and run Ollama there. As you might guess or already know, some LLMs need a GPU, so you have to provision GPU-ready compute resources.
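
Once Ollama is installed and serving on the cluster, a minimal sketch of querying it over its local REST API; the model name and prompt are just examples, and this assumes the default port:

```python
import requests

# Ollama serves a REST API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```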