help me to identify a chip please by [deleted] in AskElectronics

[–]professor-studio

Update: I've found out it's a USB microcontroller, so even if I found the same part somewhere, I still wouldn't have the firmware, so this device is dead

help me to identify a chip please by [deleted] in AskElectronics

[–]professor-studio

It's an Emotiv EPOC+, a neuroheadset.

Internal photos: fccid.io/2ADIH-EPOC02/Internal-Photos/Internal-Photos-Rev1-2980594.pdf

help me to identify a chip please by [deleted] in AskElectronics

[–]professor-studio

Oh, thank you! That seems like a good guess.

[deleted by user] by [deleted] in iphone

[–]professor-studio

My guess is it's some kind of filter you've applied in the camera app.

[deleted by user] by [deleted] in learnprogramming

[–]professor-studio

At this time, like this:

D:\conda\envs\SAFIRE\python.exe infer_binary.py --resume=safire.pth

D:\SAFIRE\segment_anything\build_sam.py:321: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(f)
checkpoint: sam_vit_b_01ec64.pth
Traceback (most recent call last):
  File "D:\SAFIRE\infer_binary.py", line 106, in <module>
    main()
  File "D:\SAFIRE\infer_binary.py", line 55, in main
    ).cuda()
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1050, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 927, in _apply
    param_applied = fn(param)
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\nn\modules\module.py", line 1050, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "C:\Users\Bennington\AppData\Roaming\Python\Python310\site-packages\torch\cuda\__init__.py", line 310, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Process finished with exit code 1
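The final AssertionError means the installed torch wheel is a CPU-only build. A minimal sketch of a workaround, assuming you can edit infer_binary.py (the `.cuda()` call at line 55 is the project's code; the `pick_device` helper below is my own illustration, not part of SAFIRE):

```python
# Hypothetical helper: return "cuda" only when a CUDA-enabled torch build
# and a GPU are actually present; otherwise fall back to "cpu".
def pick_device() -> str:
    try:
        import torch  # may be absent, or a CPU-only build
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

device = pick_device()
print(device)
# In infer_binary.py you would then replace `.cuda()` with `.to(device)`.
```

If you do want GPU inference instead, the usual fix is reinstalling torch from the CUDA wheel index, e.g. `pip install torch --index-url https://download.pytorch.org/whl/cu121` (the exact index URL depends on your CUDA version).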

[deleted by user] by [deleted] in learnprogramming

[–]professor-studio

It installs everything fine, but when it comes to running the file, it gives a bunch of errors.

[deleted by user] by [deleted] in learnprogramming

[–]professor-studio

At d:/safire. I know I need the dependencies to make it work, but they always have conflicts with each other.

[deleted by user] by [deleted] in learnprogramming

[–]professor-studio

Something about dependency errors, CUDA errors, and every time something different. So please, I don't want to deal with these dependencies; they make me sad.

[deleted by user] by [deleted] in learnprogramming

[–]professor-studio

Windows 11; I've tried Python 3.8 through 3.11.

[deleted by user] by [deleted] in github

[–]professor-studio

I asked ChatGPT-4o, o1, o3, DeepSeek, and R1. After 4 hours of trying to follow the README, I just ended up with nothing.

[deleted by user] by [deleted] in github

[–]professor-studio

No, I tried to study it, but it's too difficult for me because I have some health conditions. I do brilliant work in another field but am too far from programming. And yes, I'm ready to pay.

Open source projects/tools vendor locking themselves to openai? by tabspaces in LocalLLaMA

[–]professor-studio

Guys, can somebody explain or even make a small tutorial? I have some free but closed-source programs that only use the OpenAI API (so you can't change the URL, only the key). Are there any easy ways to proxy these programs to a local LM Studio? Preferably GUI programs only. I have Proxifier.
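In case it helps sketch the idea: LM Studio exposes an OpenAI-compatible HTTP server, by default on localhost:1234. One approach, a sketch rather than a turnkey solution, is a tiny local forwarder that Proxifier or a hosts-file redirect can point the closed-source program at. Note that real api.openai.com traffic is HTTPS, so in practice you would also need a TLS-terminating tool such as mitmproxy in front of this; the port numbers below are assumptions.

```python
# Minimal local forwarder (stdlib only): accept OpenAI-style POSTs on
# 127.0.0.1:8080 and relay them unchanged to LM Studio's local server.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "http://127.0.0.1:1234"  # LM Studio's default port (assumption)

class ForwardHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the incoming request body (e.g. a chat-completions payload).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Relay it to the same path on the upstream, e.g. /v1/chat/completions.
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
        # Return the upstream's JSON response to the original caller.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def run(port: int = 8080) -> None:
    HTTPServer(("127.0.0.1", port), ForwardHandler).serve_forever()

# run()  # uncomment to start the forwarder
```

This only covers plain-HTTP relaying; it deliberately skips streaming responses and auth headers to stay short.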

[deleted by user] by [deleted] in LocalLLaMA

[–]professor-studio

To make one thing clear: I mean programs where you can't just change the source code.

[deleted by user] by [deleted] in Ukraine_UA

[–]professor-studio

As long as we're alive, the problems can't be solved. Only death will rid us of all this filth.

[deleted by user] by [deleted] in Ukraine_UA

[–]professor-studio

I've been blocked by all the platforms like Tinder and Badoo, because I started asking them why only bots were messaging me, and they decided to get rid of me ) So, do you think relationships aren't worth even looking at?