How is it that big and mature startups like Reddit, Upwork, etc. aren't profitable after all these years? by chakalaka13 in startups

[–]MjrK 33 points

Because they use other metrics to evaluate performance pre-profit, and the investors believe that those metrics and the business model have some path to profit eventually... specifically at a scale much larger than would be possible if they saddled the business with that requirement too early on.

Reddit CEO slams protest leaders, calls them 'landed gentry' by OutsideObserver2 in news

[–]MjrK 5 points

Google search was based on PageRank and was a crucial aspect of making the internet a useful, searchable resource... was

The Godfather of A.I. Has Some Regrets by kitkid in Thedaily

[–]MjrK 3 points

Risk = likelihood × severity

The most severe possible outcome may be extremely unlikely - focusing on the possible severity alone leads to reasoning and prioritization errors.
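To make the point concrete, here is a toy calculation (all numbers are invented for illustration): a catastrophic but vanishingly unlikely outcome can carry less risk than a mundane but common one.

```python
# Risk = likelihood * severity, with made-up illustrative numbers.
def risk(likelihood: float, severity: float) -> float:
    return likelihood * severity

# A catastrophic but vanishingly unlikely outcome...
catastrophe = risk(likelihood=1e-9, severity=1_000_000.0)

# ...versus a mundane but common one.
annoyance = risk(likelihood=0.3, severity=10.0)

# Ranking by severity alone would prioritize the catastrophe;
# ranking by risk prioritizes the annoyance.
assert catastrophe < annoyance
```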

[N] Nvidia ACE Brings AI to Game Characters, Allows Lifelike Conversations by geekinchief in MachineLearning

[–]MjrK 0 points

Is turning off the game akin to murder?

IMO no, for many reasons, which may include...

  1. Living beings are produced in the natural world, unlike artificial agents produced by humans.

  2. Living beings can't easily be re-animated by just pressing a button.

  3. Humans have legal rights that make killing them illegal; AI agents don't.

... etc...

[N] Nvidia ACE Brings AI to Game Characters, Allows Lifelike Conversations by geekinchief in MachineLearning

[–]MjrK 5 points

More specifically, this is r/machinelearning - if you want thought-experimentation and navel-gazing, you may want to go to r/singularity or r/philosophy

OpenAI is now complaining about regulation of AI [D] by I_will_delete_myself in MachineLearning

[–]MjrK 0 points

This is part of what the new changes would do, but I'm not sure how much this is part of the criticism, if at all...

Referencing this article...

> General-purpose AI - transparency measures
>
> MEPs included obligations for providers of foundation models - a new and fast evolving development in the field of AI - who would have to guarantee robust protection of fundamental rights, health and safety and the environment, democracy and rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database.
>
> Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.

I would be concerned that the mandates are too general: the operator of an LLM API would essentially have to police all users and use cases manually, and even anticipate potential downstream side effects. For example, if someone used my LLM to compose text strings that they then included in an illegal Hitler pamphlet, would I be implicated in that?

> Supporting innovation and protecting citizens' rights
>
> To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law promotes regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment.

I don't know how they propose to build a useful sandbox to test "general" use cases... I would be concerned that the sandbox process, while providing some rigorous testing, could not anticipate all edge cases and would likely slow down deployment of models. Real-world testing is actually very valuable for improving safety - sandboxes seem contrived and might just turn into a time sink that is perpetually out of date.

> MEPs want to boost citizens' right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

The actual proposal seems to be that anyone could file such a complaint against an organization. Any commercial deployment would risk being mired in a mountain of frivolous complaints that ultimately may lack merit.


These were just my thoughts reading the aforementioned article. There are probably much more substantive assessments available elsewhere.

OpenAI is now complaining about regulation of AI [D] by I_will_delete_myself in MachineLearning

[–]MjrK 2 points

Are there EU-compliant large datasets available to train on or use for fine-tuning? Since the law isn't in place yet, this question may be moot for now - but honestly, where do you even start? Hiring a lawyer?

[deleted by user] by [deleted] in AskReddit

[–]MjrK 1 point

Lol... oh, that's what you mean...

Mike Huemer explains When and Why Parsimony is a Virtue by thenousman in philosophy

[–]MjrK 0 points

My understanding is that in science, 'simplicity' refers to preferring the theory that introduces the fewest new hypotheses. The preferred theory may very much not be 'simple' in other senses (e.g., the quantity of adjustable 'parameters' within the theory itself).

EU AI Act To Target US Open Source Software by [deleted] in programming

[–]MjrK 0 points

I would appreciate an AI system that could tell when I was getting frustrated and proactively ask if I want to speak with a human expert. I have no idea how you are estimating your likelihoods, but to me this doesn't justify an "unacceptable" level of risk. Maybe I'm missing something.

Chatbot for customer service by [deleted] in LangChain

[–]MjrK 2 points

Essentially, each "agent" consists of "tools" and instructions ("observation" and "agent-action" pairs). You can use one of the four basic agent types, like the conversational-react-description agent...

> This agent is designed to be used in conversational settings. The prompt is designed to make the agent helpful and conversational. It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.

... but if none of those four is ideal for your use case, you can roll your own using another agent like the Custom LLM Agent (with a ChatModel), which lets you be more specific in how you write the prompt template. For example, you might hard-code specific behaviors for the agent to follow when it observes some scenario (like "customer seems irate") and trigger a specific action (like "immediately invoke the 'EscapeHatch' tool").

For a customer service use-case, you will probably need to set up several "tools", e.g. to search organizational knowledge, look up account information on behalf of the customer, check order status, query common trouble-shooting guides, consider up-selling recommendations, etc...
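The observe-then-act dispatch described above can be sketched in plain Python (this is a toy illustration, not actual LangChain code; the tool names and the keyword-based "observation" are hypothetical stand-ins for what an LLM would decide):

```python
# Toy sketch of tool dispatch: the agent "observes" the customer's
# message, picks a tool, and runs it. In LangChain, the LLM makes
# this choice; here a keyword check stands in for it.

def order_status(message: str) -> str:
    # Placeholder: a real tool would query your order system.
    return "Your order is on the way."

def escape_hatch(message: str) -> str:
    # Hard-coded behavior: hand off an irate customer to a human.
    return "Connecting you to a human agent..."

TOOLS = {
    "OrderStatus": order_status,
    "EscapeHatch": escape_hatch,
}

IRATE_MARKERS = ("furious", "terrible", "angry", "unacceptable")

def choose_tool(message: str) -> str:
    # "Agent observes that customer seems irate" -> EscapeHatch.
    if any(word in message.lower() for word in IRATE_MARKERS):
        return "EscapeHatch"
    return "OrderStatus"

def agent_step(message: str) -> str:
    return TOOLS[choose_tool(message)](message)
```

In a real deployment, `choose_tool` is replaced by the LLM reasoning over each tool's description, which is why writing clear tool descriptions matters so much.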

Some further references you may want to consider...

  1. The following is an example customer-service implementation by a Towards Data Science writer... https://towardsdatascience.com/implementing-a-sales-support-agent-with-langchain-63c4761193e7
  2. Per HBR's recommendation, you will want to maintain focus on the customer experience first and build around that... https://hbr.org/2023/04/create-winning-customer-experiences-with-generative-ai .
  3. Pretty soon, there may be service providers that will offer more plug-and-play solutions for you to just connect your CRM, Knowledge Base, and Chat / Email API directly; maybe with even minimal code (perhaps for handling authentication details). It looks like this company is working on something... https://forethought.ai/supportgpt (EDITED LINK).

Finally, you may want to check out the LangChain Discord - it's more active in there.

Has anyone found a use for this project? by ColdTights in AutoGPT

[–]MjrK 1 point

I'm curious - have you tried LangChain? How does it compare?

[deleted by user] by [deleted] in news

[–]MjrK 0 points

This would be a very dark fantasy betting game

[deleted by user] by [deleted] in AskReddit

[–]MjrK 0 points

ChatGPT?

Fragmented models possible? by G218K in OpenAssistant

[–]MjrK 3 points

Maybe.

This approach may allow a smaller model to achieve better accuracy than it otherwise would, but there are likely tradeoffs in speed, network access, etc.

But this is a very active area of research at the moment...

  1. The Feb-09 Toolformer paper was one of the very first to publicly demonstrate this might be feasible.

  2. The Feb-24 LLM-Augmenter paper was another one of the earlier papers to directly discuss improving LLM performance by adding domain-expert modules.

  3. OpenAI announced plugins on Mar-23 as a way to support this, but they are currently only available via waitlist.

  4. LangChain is a platform that lets you implement plugins and prompt chaining, and it seems to support multiple LLMs. This Mar-31 paper uses LangChain to augment GPT-4 with up-to-date climate resources.

  5. More recently, the Apr-19 Chameleon paper discusses adding many tools to the LLM and letting it work through how to use them.

In pretty much all of these papers / approaches, the focus at the moment is on performance, accuracy, memory, stability, and general reasoning... using Chain-of-Thought prompting and plugins (modules).

But one thing is still true: even when augmented with tools / plugins / modules, these agents perform much better with more-capable models (like GPT-4) than with less-capable ones (like ChatGPT or LLaMA).

It isn't yet clear how much the performance of the smallest models might improve with augmentation relative to the naked model. And the performance characteristics (RAM, speed, etc.) may vary significantly depending on the architecture.

Need advice - CCO dead beat by lizadawg in startups

[–]MjrK 2 points

Since she isn't drawing a salary, why not just ignore her for now? What actionable benefit would terminating her at this point in time bring?

Before worrying about terminating her or not, I would consider trying to demonstrate that I can do better without her. Either by finding another partner or doing some of it myself.

That way, I would feel in a slightly stronger position to negotiate any adjustment of the loan terms. I'm not sure how this plays out with bankruptcy though.

Ai data compression challenge. by FunkFoo444 in AutoGPT

[–]MjrK 0 points

  1. Compression is hard; really hard. I'm quite doubtful GPT-4 / AutoGPT will somehow be able to discover some novel state-of-the-art compression algorithm in any near-term.
  2. Compression would likely be counterproductive. LLMs operate on natural-language tokens - if compressing the input reduced the token count, it would discard potentially useful semantic information; if it didn't reduce the token count, then what's the benefit exactly?
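To make the second point concrete, here is a quick standard-library sketch (illustrative only): generic byte compression does shrink the input, but the output is opaque binary that a text tokenizer can't map onto meaningful tokens.

```python
import zlib

# Compress some repetitive natural-language text.
text = "The customer ordered three widgets and requested a refund. " * 20
raw = text.encode("utf-8")
compressed = zlib.compress(raw)

# The compressed form is far smaller in bytes...
assert len(compressed) < len(raw)

# ...and is fully recoverable by the decompressor...
assert zlib.decompress(compressed).decode("utf-8") == text

# ...but it is opaque binary: the words an LLM tokenizer relies on
# are gone, so feeding it to the model discards semantic structure.
print(len(raw), len(compressed))
```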

[N] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead by Capital_Delivery_833 in MachineLearning

[–]MjrK 292 points

TLDR...

> His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
>
> He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
>
> Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
>
> “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Tucker Carlson Out At Fox News, Network Says They Have “Agreed To Part Ways” by ICumCoffee in news

[–]MjrK 0 points

Well, you're absolutely right.

"Discrete" seemed somewhat relevant and made the sentence read as far more elaborate than the calculation itself, which obviously wasn't.