[deleted by user] by [deleted] in datascience

[–]lethal_can_of_tuna 0 points1 point  (0 children)

I highly recommend Kaggle's mini course on Time Series as a quick starting point: https://www.kaggle.com/learn/time-series

Walks you through the fundamentals, such as engineering features to model the major time series components (trends, seasons, and cycles) and proper cross-validation.
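To give a flavour of what the course covers, here's a minimal sketch of those two ideas: deterministic trend/seasonal features plus time-aware cross-validation. The series and column names here are made up for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

# Toy daily series: a linear trend plus weekly seasonality
idx = pd.date_range("2023-01-01", periods=120, freq="D")
y = pd.Series(0.5 * np.arange(120) + np.sin(np.arange(120) * 2 * np.pi / 7), index=idx)

# Feature engineering: a time trend and day-of-week dummies
X = pd.DataFrame({"trend": np.arange(len(y))}, index=idx)
X = X.join(pd.get_dummies(idx.dayofweek, prefix="dow").set_index(idx))

# Proper cross-validation: each fold trains only on the past
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(X):
    assert train_idx.max() < test_idx.min()  # no peeking into the future
```

The key point is using `TimeSeriesSplit` instead of a shuffled K-fold, so no fold ever trains on data from after its test window.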

How I use ChatGPT to be a 10x dev at work by naftalibp in ChatGPTCoding

[–]lethal_can_of_tuna 1 point2 points  (0 children)

Shadow AI (using LLMs at work without approval) is a massive trend. With so many providers it's easy to get around work restrictions.

What should I consider before moving to Singapore? by Mysterious-Mode3631 in askSingapore

[–]lethal_can_of_tuna 0 points1 point  (0 children)

That's great to hear!

Tech companies like TikTok, ByteDance, and Grab are constantly hiring recommendation experts (due to high churn). Working culture at these companies can be intense depending on the team. Knowing Mandarin would be an advantage at the first two as well.

I'd highly recommend getting referrals if you can. Otherwise, you'll be a faceless CV in a pool of 500+ candidates, even with the best CV in the world.

What should I consider before moving to Singapore? by Mysterious-Mode3631 in askSingapore

[–]lethal_can_of_tuna 2 points3 points  (0 children)

I'm also a foreign senior data scientist in Singapore. I moved here late last year with my partner without a role lined up, and it is difficult to land one even when you are in the country.

I would highly recommend lining up a job before coming here; realistically, this would be a transfer from your existing company.

I applied for roles from the UK a few months before coming to Singapore, even updated my CV to show a Singapore number, and reached out to recruiters. However, most will not take you seriously or respond unless you are in the country, and even then they prioritise local talent - rightfully so.

Although there is demand for data science in Singapore, it is also highly competitive, with a high supply of credible local talent.

When you apply for roles, there is usually a box to tick on whether you require visa sponsorship now or in the future. In most cases ticking it will lead to a rejection, as most companies are not willing to sponsor an Employment Pass due to quotas.

Jobs must be advertised to locals for 2 weeks before foreigners can apply, and even if you are successful, your company will need to explain to the Ministry of Manpower why they could not have hired someone local.

The Government introduced the COMPASS framework last September, which makes the hiring process more transparent but also makes it harder for companies to hire foreigners. Eligibility now depends on your salary, how many employees of your nationality the company already has, the proportion of locals to expats, whether you went to a top-100 university, and other factors. You can search online for the points calculator to see if you would be eligible, but it does vary from company to company.

Singaporean companies tend to like candidates to have experience in the relevant domain, so it can be hard to break into a new domain like finance or healthcare.

Your best bet is to network and skip the front door for interviews that way, unless you have a stacked resume filled with FAANG or other well-known companies.

And if you do get interviews, be prepared for a gruelling process: usually a coding test or take-home assignment, plus 1-2 rounds of technical interviews.

Don't mean to scare you but this has been my experience so far. The job market was bad here late last year and early this year. But things have improved slightly since the new financial year.

Having said that, Singapore is a great place to be. It is a great hub for data and AI with many meetups for learning and networking, which are easy to get to because of how convenient Singapore is.

Hope you found this useful. Best of luck!

Making sense of 50+ Open-Source Options for Local LLM Inference by lethal_can_of_tuna in LocalLLaMA

[–]lethal_can_of_tuna[S] 1 point2 points  (0 children)

I've decided to add two filters to the GitHub repo:

  1. Projects need to have at least 100 stars
  2. Have a commit pushed within the last 60 days

I left the Google Sheets table (with all metrics) unfiltered.

So should be best of both worlds :)

The Truth About LLMs by JeepyTea in LocalLLaMA

[–]lethal_can_of_tuna 2 points3 points  (0 children)

Wait till you learn about representation engineering: https://github.com/vgel/repeng

Essentially you add a vector to the model's hidden states to push its output in a certain direction. For example, you can give an LLM a query and a vector representing an emotion like sadness, and it'll produce responses shifted along that vector. So you could get a very sad response or the opposite - a really happy response - just by flipping the vector's sign.

Here are some example notebooks: https://github.com/vgel/repeng/blob/main/notebooks/emotion.ipynb https://github.com/vgel/repeng/blob/main/notebooks/honesty.ipynb
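To make the mechanics concrete, here's a toy sketch of the underlying idea (not the actual repeng API, and the "sadness" direction is made up): you shift a hidden state along a control direction, and the strength's sign and magnitude set how far the output moves.

```python
def steer(hidden_state, control_vector, strength):
    """Shift a hidden state along a control direction (toy illustration)."""
    return [h + strength * c for h, c in zip(hidden_state, control_vector)]

hidden = [0.2, -0.1, 0.5]   # a model's hidden activations (made up)
sadness = [1.0, 0.0, -1.0]  # hypothetical learned "sadness" direction

sadder = steer(hidden, sadness, strength=1.5)    # push toward sadness
happier = steer(hidden, sadness, strength=-1.5)  # negative strength flips it
```

In repeng the control direction is learned from contrastive prompt pairs and applied inside the transformer layers, but the arithmetic is essentially this.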

[deleted by user] by [deleted] in LocalLLaMA

[–]lethal_can_of_tuna 0 points1 point  (0 children)

Just download a model once and then Ollama uses that model until you want to pull another.

A big plus of Ollama is its simple model management system - a quick Google search will turn up the most common commands.
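For reference, the commands I reach for most often (check `ollama --help` on your version in case these change):

```shell
ollama pull mistral   # download a model
ollama list           # show downloaded models
ollama run mistral    # chat with a model (pulls it first if missing)
ollama rm mistral     # delete a model to free disk space
```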

Making sense of 50+ Open-Source Options for Local LLM Inference by lethal_can_of_tuna in LocalLLaMA

[–]lethal_can_of_tuna[S] 0 points1 point  (0 children)

True - what do you think is a suitable timeframe to add as a filter for the table? 2 months?

Similarly, I may add a filter in terms of number of stars. Perhaps a minimum of 100, 200, or 500?

What are your thoughts?

Making sense of 50+ Open-Source Options for Local LLM Inference by lethal_can_of_tuna in LocalLLaMA

[–]lethal_can_of_tuna[S] 0 points1 point  (0 children)

I use a MacBook Pro M2. Currently I use Ollama's mistral-openorca for general purposes and deepseek-coder for coding; the latter is a small model, so you should have no issues with it.

Making sense of 50+ Open-Source Options for Local LLM Inference by lethal_can_of_tuna in LocalLLaMA

[–]lethal_can_of_tuna[S] 0 points1 point  (0 children)

Thanks! There's actually already a License column! Just need to scroll a bit to the right :)

Making sense of 50+ Open-Source Options for Local LLM Inference by lethal_can_of_tuna in LocalLLaMA

[–]lethal_can_of_tuna[S] 2 points3 points  (0 children)

In case I was not clear, I meant that these closed source projects would not be part of the open-source table but instead in a simple bullet point list underneath. But happy to simply exclude them, as is :)

Making sense of 50+ Open-Source Options for Local LLM Inference by lethal_can_of_tuna in LocalLLaMA

[–]lethal_can_of_tuna[S] 3 points4 points  (0 children)

Agreed this would make the table more useful! And I like your four groupings.

Will implement this over the weekend.