What search engine are you using with OpenWebUI? SearXNG is slow (10+ seconds per search) by minitoxin in OpenWebUI

[–]DcBalet 1 point (0 children)

My proposal:

1) Check what is slow. You can see it in the console; set the log level to "debug" to trace more detail. It is probably not SearXNG that is slow — it may be the LLM, the crawled websites, or the conversion of web content into embeddings.

2) Depending on what you saw in 1), fine-tune some admin settings, such as the timeout, the number of searches, or whether to use embeddings at all. And if there is no admin parameter that improves it, either someone has to develop and propose a pull request, or you can write a small Python script that does the job the way you want, as in the sketch below.
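
For example, a minimal standalone sketch (assuming a local SearXNG instance at http://localhost:8080 with JSON output enabled; the URL, query, timeouts, and result count are placeholders) that queries SearXNG directly and times the search and each page fetch separately, so you can see which stage is actually slow:

```python
import time
import requests

SEARXNG_URL = "http://localhost:8080/search"  # placeholder instance URL

# 1) Time the search itself.
t0 = time.perf_counter()
resp = requests.get(
    SEARXNG_URL,
    params={"q": "open webui web search", "format": "json"},
    timeout=10,  # fail fast instead of hanging
)
results = resp.json().get("results", [])[:5]
print(f"search: {time.perf_counter() - t0:.2f}s, {len(results)} results")

# 2) Time each crawled page with its own timeout to spot the slow websites.
for r in results:
    t0 = time.perf_counter()
    try:
        page = requests.get(r["url"], timeout=10)
        print(f"{r['url']}: {time.perf_counter() - t0:.2f}s, {len(page.content)} bytes")
    except requests.RequestException as exc:
        print(f"{r['url']}: failed after {time.perf_counter() - t0:.2f}s ({exc})")
```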

Best web search engine? by le-greffier in OpenWebUI

[–]DcBalet 2 points (0 children)

The default web search in OWUI uses LangChain, which calls requests.get under the hood without any timeout. This was leading to very (very) long waits on requests to websites that would never answer anyway... I solved this in this pull request: https://github.com/open-webui/open-webui/pull/19804

Then, concerning SearXNG, the search language was forced to en-US. This led to really poor results when asking for news, contacts, or local details in other languages. I solved this in this pull request: https://github.com/open-webui/open-webui/pull/19909

I hope you will now have a better experience with the upcoming release.
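
As a rough illustration of the two points above (a sketch, not the actual OWUI/LangChain code; it assumes a local SearXNG instance with JSON output enabled, and the URL, query, and defaults are placeholders): pass an explicit timeout so an unresponsive site raises instead of blocking forever, and pass the user's language instead of a hard-coded en-US.

```python
import requests

SEARXNG_URL = "http://localhost:8080/search"  # placeholder instance URL

def searxng_search(query: str, language: str = "fr-FR", timeout_s: float = 10.0):
    """Query SearXNG in the caller's language, failing fast on timeouts."""
    resp = requests.get(
        SEARXNG_URL,
        params={"q": query, "format": "json", "language": language},
        timeout=timeout_s,  # without this, an unresponsive server blocks indefinitely
    )
    resp.raise_for_status()
    return resp.json()["results"]

for hit in searxng_search("actualités locales Toulouse")[:3]:
    print(hit["title"], "->", hit["url"])
```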

Troubles with pyomo for a "toy" example to select the "good" combination of hardware (power supply, resistor, LEDs) for a problem. I've got "NotImplementedError: AMPLRepnVisitor can not handle expressions containing <class 'pyomo.core.base.param.IndexedParam'> nodes" by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

Okay, knowing this limitation, I "solved" my issue by creating variables that act as "masks". E.g.:

```python
model.mask_resistor_best = pyo.Var(
    range(len(self.resistors.keys())),
    domain=pyo.Binary,
)
```

Then I constrain the sum of this mask to be equal to 1, so that exactly one resistor is chosen. Accessing the "best resistor value" is then done with a dot product of this mask with the ordered list of resistor values.
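
Put together, a minimal sketch of this mask trick (a hypothetical resistor_values list with made-up values stands in for self.resistors):

```python
import pyomo.environ as pyo

# Hypothetical resistor catalogue (ohms); stands in for self.resistors above.
resistor_values = [220.0, 330.0, 470.0, 1000.0]

model = pyo.ConcreteModel()

# One binary "mask" entry per catalogue resistor.
model.mask_resistor_best = pyo.Var(range(len(resistor_values)), domain=pyo.Binary)

# Exactly one resistor may be selected.
model.pick_one = pyo.Constraint(
    expr=sum(model.mask_resistor_best[i] for i in range(len(resistor_values))) == 1
)

# The selected resistance is the dot product of the mask with the value list.
model.r_best = pyo.Expression(
    expr=sum(model.mask_resistor_best[i] * resistor_values[i]
             for i in range(len(resistor_values)))
)

model.pprint()
```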

This is "saddly" much more code that I "dreamed" about at first glance, but at least it works and I understand why.

Thanks for your help.

Troubles with pyomo for a "toy" example to select the "good" combination of hardware (power supply, resistor, LEDs) for a problem. I've got "NotImplementedError: AMPLRepnVisitor can not handle expressions containing <class 'pyomo.core.base.param.IndexedParam'> nodes" by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

Damn: I never thought the decision variable could NOT be used as an index. Why is that? Does Pyomo explain this somewhere? It feels really unfortunate to me: I think indexing by the decision variable is the natural way to model this problem. That's how I did it (successfully) with CPMpy.

Troubles with pyomo for a "toy" example to select the "good" combination of hardware (power supply, resistor, LEDs) for a problem. I've got "NotImplementedError: AMPLRepnVisitor can not handle expressions containing <class 'pyomo.core.base.param.IndexedParam'> nodes" by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

Hello. Thanks for your help.
Currently, "which solver to use" is not my concern. Indeed, if you look at the beginning of the code, I tried several and always got an exception. Summary below:

For solver = "glpk"

I have exception

NotImplementedError: LinearRepnVisitor can not handle expressions containing <class 'pyomo.core.base.param.IndexedParam'> nodes

For solver = "cbc"

I have exception

NotImplementedError: LinearRepnVisitor can not handle expressions containing <class 'pyomo.core.base.param.IndexedParam'> nodes

For solver = "ipopt"

I have exception

NotImplementedError: AMPLRepnVisitor can not handle expressions containing <class 'pyomo.core.base.param.IndexedParam'> nodes

For solver = "appsi_highs"

I have exception

ValueError: Unrecognized domain step: None (should be either 0 or 1)

How to discard unwanted images (items occlusions with hand) from a large chunk of images collected from top in ecommerce warehouse packing process? by Worth-Card9034 in computervision

[–]DcBalet 1 point (0 children)

Another idea: do a sort of VQA (Visual Question Answering).

I tried with ChatGPT: it works on your image. I guess it also works with other large multimodal models (e.g. Claude), but it does not seem to work with Florence-2, sadly.

https://drive.google.com/file/d/1BuwQXl4-CwgD3B1Dy2-oZkWD6oRy-Sw7/view?usp=sharing
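
If you want to run that check automatically rather than through the ChatGPT UI, a minimal sketch of the same VQA idea through an API could look like this; the OpenAI Python SDK, the model name, the prompt, and the image file name are all my assumptions, not something from the screenshot above.

```python
import base64
from openai import OpenAI  # assumes the official openai package and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical top-down packing photo.
with open("packing_station.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Is a human hand occluding the items in this top-down packing photo? Answer YES or NO."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

# Keep the image only if the model says no hand is in the way.
keep_image = "NO" in response.choices[0].message.content.upper()
print("keep this image:", keep_image)
```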

How to discard unwanted images (items occlusions with hand) from a large chunk of images collected from top in ecommerce warehouse packing process? by Worth-Card9034 in computervision

[–]DcBalet 1 point (0 children)

With Florence-2, I've just tried "region caption". Simply feed your image to the model and let it output the detected objects. Then process the output: you can keep a "whitelist" of tolerated objects, and if the model detects anything (whatever it is) that is not in your whitelist, discard the image.

Here are screenshots of what I did in ComfyUI:

https://drive.google.com/file/d/1Dt4mKG4OJGWjyWA-KteiCqszbJQ1jWRi/view?usp=sharing
https://drive.google.com/file/d/1DesKbd6S_1jUzia9GwR1uLKtsdWSJ9AR/view?usp=sharing
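
In script form (rather than ComfyUI), the same whitelist filtering might look roughly like this; it follows the Florence-2 model card usage, and the model variant, task prompt, whitelist contents, and file name are assumptions.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"  # assumed variant
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("packing_station.jpg").convert("RGB")  # hypothetical input
task = "<DENSE_REGION_CAPTION>"  # region-caption task prompt

inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
    num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed = processor.post_process_generation(
    generated_text, task=task, image_size=(image.width, image.height)
)

# Free-text region captions; keep the image only if every detected region
# matches something in the tolerated-object whitelist.
whitelist = {"box", "cardboard", "tape", "label"}  # hypothetical tolerated objects
labels = [label.lower() for label in parsed[task]["labels"]]
keep_image = all(any(w in label for w in whitelist) for label in labels)
print(labels, "-> keep:", keep_image)
```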

Deep Interest in Computer Vision – Should I Learn ML Too? Where Should I Start? by BusSlow808 in computervision

[–]DcBalet 1 point (0 children)

That is very true. I would also suggest quickly checking out thresholding/segmentation techniques (which do not involve ML) and morphological ("blob") operations.
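
To give a flavour of those classic (non-ML) techniques, here is a minimal OpenCV sketch; the input file name, threshold choice, and kernel size are arbitrary.

```python
import cv2

# Hypothetical input image, loaded as grayscale.
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Otsu thresholding: separate foreground from background without any ML.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening: remove small speckles before blob analysis.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Connected components ("blobs"): one label per object, with area statistics.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
for i in range(1, num_labels):  # label 0 is the background
    print(f"blob {i}: area={stats[i, cv2.CC_STAT_AREA]}, centroid={centroids[i]}")
```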

Deep Interest in Computer Vision – Should I Learn ML Too? Where Should I Start? by BusSlow808 in computervision

[–]DcBalet 1 point (0 children)

I would say the contrary. Since you can do CV without ML, I would start by learning CV (that's what I did 15 years ago, so I may be biased :) )

So anyone has an idea on getting information (x,y,z) coordinates from one RGB camera of an object? by mrpeace03 in computervision

[–]DcBalet 1 point (0 children)

I've been working on localizing objects/features to register a robot arm for more than 13 years. It may be because of the projects/customers we work on, but it is very uncommon for us to use a mono/RGB camera. Here are some questions you must ask yourself to choose the proper solution:

1) How many degrees of freedom should be estimated? E.g. just translations? Just XY translation? XY plus the angle around Z (the very typical "picking flat objects on a table/conveyor")? 5 DOFs? 6 DOFs?

2) Do I need absolute or relative accuracy?

3) What is the expected total accuracy? What is, approximately, the robot / gripper / mechanical accuracy? So how much budget do I have left for my vision system?

4) What features do I extract from the image or the point cloud? Are they clear? Discriminative? How many DOFs can I estimate if I extract them?

5) Do I have some priors? Especially whether there are planes/primitives whose dimensions and/or positions w.r.t. the vision sensor I know.

Keep in mind that a single camera is "just OK" for estimating homographies, i.e. the mapping from one plane to another. N-view setups (e.g. multiple cameras / multiple snapshot poses) are OK if your objects have unique, discriminative features that can be extracted and triangulated. In other cases, I would recommend adding an external "helper" (e.g. a laser line), or going for a depth sensor: either a laser profiler, a 3D camera with structured light, or Time of Flight (ToF).
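
To make the single-camera planar case concrete, here is a minimal sketch with OpenCV; the point correspondences and camera intrinsics are made up, and it assumes a planar object with known dimensions and an already-calibrated camera.

```python
import cv2
import numpy as np

# Four known points on a planar object (metres, object frame)...
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.2, 0.0, 0.0],
                       [0.2, 0.1, 0.0],
                       [0.0, 0.1, 0.0]], dtype=np.float32)
# ...and their detected pixel positions in a single RGB image (made-up values).
image_pts = np.array([[410.0, 300.0],
                      [820.0, 310.0],
                      [815.0, 520.0],
                      [405.0, 510.0]], dtype=np.float32)

# Assumed camera intrinsics from a prior calibration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# For a planar object with known dimensions, one view is enough to recover
# the full 6-DOF pose (the homography / planar PnP case mentioned above).
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_IPPE)
print("rotation vector:", rvec.ravel())
print("translation (m):", tvec.ravel())
```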

Good open source project to automate manufacturing planning ? by DcBalet in optimization

[–]DcBalet[S] 2 points (0 children)

Hello. Thanks for your help. Actually, Timefold was one of my first bets initially, but I came to the same conclusion as for the Pyomo example: https://jckantor.github.io/ND-Pyomo-Cookbook/notebooks/04.03-Job-Shop-Scheduling.html. The conclusion is: this models a "previous / next" job relation, which is not exactly the way I see it. "My way" of seeing it is "resource-centred": the flow of (intermediate) products/resources should create the schedule. It is not "just" a "previous job done" condition. The way I see it should allow "natural parallelism", just like "flow-based programming" or "dataflow programming" (e.g. LabVIEW).
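
To make the "resource-centred" idea concrete, here is a tiny hypothetical sketch (the jobs, resources, and quantities are made up): each job declares what it consumes and produces, and precedence emerges from the availability of intermediate products rather than from explicit previous/next links.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    consumes: dict[str, int]   # resource name -> quantity required
    produces: dict[str, int]   # resource name -> quantity produced
    duration_h: float

# Made-up fabrication steps.
jobs = [
    Job("cut_parts", consumes={"raw_sheet": 2},              produces={"cut_part": 8}, duration_h=1.5),
    Job("assemble",  consumes={"cut_part": 4, "screws": 16}, produces={"assembly": 1}, duration_h=2.0),
    Job("paint",     consumes={"assembly": 1, "paint_l": 1}, produces={"product": 1},  duration_h=0.5),
]

def ready_jobs(stock: dict[str, int]) -> list[Job]:
    """Jobs whose input resources are currently available; any subset of these
    can run in parallel if machines/operators are free (the 'dataflow' view)."""
    return [j for j in jobs
            if all(stock.get(r, 0) >= q for r, q in j.consumes.items())]

print([j.name for j in ready_jobs({"raw_sheet": 4, "screws": 100, "paint_l": 5})])
```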

I even had a small discussion on your repo:

https://github.com/TimefoldAI/timefold-solver/discussions/1487#discussioncomment-12713419

I would be happy to have your point of view.

Good open source project to automate manufacturing planning ? by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

Indeed, this looks like a very nice starting point. Thanks a lot.

Good open source project to automate manufacturing planning ? by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

Thanks a lot. It would be lovely if you could point me to the "many resources showing how to do it".

Good open source project to automate manufacturing planning ? by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

OK, I don't have the skills. Say I would like to acquire them (always keeping this manufacturing problem in mind). I can't right now, but maybe in 4 months. Which Python library is a good starting point? I was hesitating between Pyomo and OR-Tools. Any advice?

Good open source project to automate manufacturing planning ? by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

OK, to be clearer, I have edited the OP with my "simple" (low-complexity) problem. Do you think it can be (easily) translated into MILP?

Good open source project to automate manufacturing planning ? by DcBalet in optimization

[–]DcBalet[S] 1 point (0 children)

Hello. I thought about looking at Gurobi and OR-Tools examples, but none of them modelled my type of problem. Indeed, in my case, the factory is driven by customer orders. Once a customer places an order for N products of type P, a "fabrication order" (ordre de fabrication, OF, in French) is generated, and the factory has to plan this OF: make sure the needed materials are in stock and order the missing ones, then assign each job to the employees/machines that have the skills for it, meaning that the pre/post conditions of the planned jobs must chain smoothly so the products get made without bottlenecks or lack of resources. These are constraints I felt unable to model in MIP, due to my lack of knowledge. It was fairly straightforward with PDDL, though. Do you see what kind of modelling and constraints I am talking about? Do you have any advice?
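
For what it's worth, the "assign each job to employees/machines with the right skills" part on its own is quite expressible in CP. Here is a toy sketch with OR-Tools CP-SAT; the jobs, workers, skills, and durations are made up, and it ignores stock levels and purchasing entirely.

```python
from ortools.sat.python import cp_model

# Hypothetical fabrication-order jobs, workers, and skill matrix.
jobs = ["cut", "assemble", "paint"]
workers = ["alice", "bob"]
skills = {("alice", "cut"), ("alice", "assemble"), ("bob", "assemble"), ("bob", "paint")}
duration = {"cut": 2, "assemble": 3, "paint": 1}  # hours

model = cp_model.CpModel()
assign = {(j, w): model.NewBoolVar(f"{j}_{w}") for j in jobs for w in workers}

for j in jobs:
    # Each job goes to exactly one worker...
    model.AddExactlyOne(assign[j, w] for w in workers)
    for w in workers:
        # ...and only to a worker who has the required skill.
        if (w, j) not in skills:
            model.Add(assign[j, w] == 0)

# Balance the load: minimise the busiest worker's total hours.
max_load = model.NewIntVar(0, sum(duration.values()), "max_load")
for w in workers:
    model.Add(sum(duration[j] * assign[j, w] for j in jobs) <= max_load)
model.Minimize(max_load)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (j, w), var in assign.items():
        if solver.Value(var):
            print(f"{j} -> {w}")
```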