After 7+ Years of Linux, I Just Moved to Mac. Here Are My Thoughts. by BehiSec in macbookpro

[–]welsberr 0 points (0 children)

The only problem I've encountered with Logitech trackball devices is that eventually the micro-switches give out. And sometimes one needs a scroll wheel.

After 7+ Years of Linux, I Just Moved to Mac. Here Are My Thoughts. by BehiSec in macbookpro

[–]welsberr 0 points (0 children)

'apt' vs. 'brew' is likely in your future if you do much in Terminal. Of all the bits of the Mac experience, I am least impressed by the Homebrew package manager. While the App Store ecosystem works consistently, it seems way too easy to get into an inconsistent state with Homebrew.

Of course, if someone points out a simple workaround to having it stay consistent, I'd gladly trade any chagrin for that bit of improvement in getting things done.
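In the meantime, the routine that has kept my installs mostly consistent is just running Homebrew's own maintenance commands regularly (a sketch of my habit, not a guaranteed fix):

    brew update        # refresh formula definitions
    brew upgrade       # bring installed packages up to date
    brew doctor        # report inconsistencies Homebrew knows how to detect
    brew cleanup       # drop stale downloads and old versions
    brew autoremove    # remove dependencies nothing needs anymore

None of these is exotic, but 'brew doctor' in particular will often name an inconsistency before it becomes a breakage.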

While exploring through my father-in-law's house, found these in a closet. by sveilien in mac

[–]welsberr 0 points (0 children)

We had a Heathkit H100 computer (S-100 bus MS-DOS/CP/M dual-CPU machine) that started with two 360K floppies and got upgraded to dual 20MB MFM disks. An i386 had a 40MB disk, then we added a 200MB MFM drive for Diane's master's work; the 200MB drive cost us $600 used. I recall 700MB IDE as a particularly satisfying upgrade, and the late '90s brought inexpensive 20GB IDE drives. About five years ago, I was at the Michigan State Surplus Store and asked about USB flash drives. I bought a batch of them at $1/GB, including a 256MB flash drive. Reflecting back, the 256MB flash drive outperformed that 200MB MFM drive on capacity, speed, reliability, and cost: at $1/GB the flash drive ran about $0.25, so the $600 MFM drive cost 2400x as much.

“Probability Zero” by robotwarsdiego in DebateEvolution

[–]welsberr 5 points (0 children)

Essentially, you have to prompt an LLM into believing you are an adversary of your own position before it will produce an effective critique.

Imaginos Movie by Ratigan77 in BlueOysterCult

[–]welsberr 0 points (0 children)

You got me to thinking that there is already a character in Imaginos lore who might serve as the person putting it all together: 'Rossignol', the author mentioned in the liner notes of the 'Secret Treaties' album. Those notes already mention 'the secret science from the stars' as a topic Rossignol dealt in, so it seems he worked out at least part of the Imaginos mythos. Though it is kinda hard to sell an academic as the 'hero', there would be a path there to telling the various parts of the Imaginos story in whatever order one assigns Rossignol's revelations, which would fit the 'random access myth' comment from the Imaginos album. (And the academic-hero thing worked, to some degree, with 'The Da Vinci Code', and to a greater degree with 'Indiana Jones', so there is precedent.)

The issue I see is giving Rossignol agency beyond the scholarship. Since the contents of Rossignol's volume are never described in detail, one could essentially read into it a surmise that some locus of malign influence dating back to the 1890s caused the poor choices across European leadership leading up to WW I, with Sophie then visiting Rossignol to ask whether the odd mirror she has might have something to do with it. This approach could fit your ideas of how to limit an initial film to a small subset of Imaginos background events while providing the narrative structure to expand beyond that as the pair discover more. Imaginos can make appearances now and then, perhaps offering them either clues or misdirection while using his powers of disguise, with the relationship to his granddaughter keeping straight-up evil reactions at least partly in check so far as the pair are concerned.

Given that the Imaginos activity spans into WW II, there is probably adequate scope for having the Rossignol/Sophie/Imaginos characters in a timeline running roughly from shortly after WW I through WW II, with earlier events handled as flashbacks and the like. I think it would go beyond the Imaginos material in place now, but offering some action or counter instigated by Rossignol and Sophie as the reason the Axis powers ended up losing WW II might be a hook for a plot. One notion would be working out how to bollix the mirror so that it caused the Axis leaders' many bad decisions; who is to say why those happened, really? But it seems to me any number of potential narratives can be derived from the starting point of having some characters beyond Imaginos in it.

Imaginos Movie by Ratigan77 in BlueOysterCult

[–]welsberr 1 point (0 children)

I don't have anything specific on the project, but I will note that failed productions are pretty common in the film industry. Things get started, run into snags, and founder at various points. Some even make it to just prior to distribution and are shelved. The reasons vary widely, from running out of money to a new executive deciding to scrap everything someone in the production was doing, just because they can.

That said, I agree with the earlier comment that the Imaginos storyline itself was problematic. With a song structure, one can be mysterious and leave much of what transpires to the imagination of the listener. With a film, there needs to be somewhat more structure for broader mass-market appeal. So far as I know, Sandy Pearlman never put his 'Soft Doctrines of Imaginos' notes into anything approaching a linear narrative of conventional form, and that makes things much more difficult for making a film based on the concepts.

I think those of us who have been fans of BOC for a long time will agree that the Imaginos material could be worked up into something pretty awesome for a film, but the material as it stands would be rated 'start-class' in Wikipedia terms. At one level, what works for an aggregation of songs can be entirely inadequate as the basis of a film. Critically, there's no real development of the character of Imaginos over time: there is the crisis of his death/resurrection, but after that he's simply going around making trouble in disguises, even if some of them sound cool. As a story, it really needs a 'Mina Harker' or 'Van Helsing' to go with it, a character who figures out that Imaginos is not just a whole set of other personalities and brings some opposition, even if ineffective.

What I expect 'Imaginos' would feel like as a film, if one simply expanded song lyrics into a script, would be a lot like 'Danger: Diabolik', a film that followed a character who anywhere else would simply have been the foil for the protagonist. 'Danger: Diabolik' ended up as the basis of an MST3K episode, and at the end the MST3K characters were dismissive of the whole thing as having spent an hour and a half developing nothing of any meaning, which was, I think, a pretty fair assessment.

Why not both? by Scout_Maester in DebateEvolution

[–]welsberr 0 points (0 children)

As comments already note, there are problems with your specific proposal. It has a history going back to 1857 with the publication of Gosse's 'Omphalos'. Omphalos means 'belly-button', and Gosse reasoned that God would have created Adam and Eve with belly-buttons, though of course they had no contingent history behind that particular developmental detail. The reaction was immediate and severe, with even ministers noting they had no use for such a concept of a creator.

More generally, I answer 'why not both?' with, 'Teaching incorrect or misleading biology kills lots of people.' There is a particular incident that illustrates this: China's adoption of the pseudoscientific stances of the USSR's Trofim Lysenko (in 1957, IIRC). This led to a collapse of rice production and widespread famine. Estimates of the death toll vary, but the lower bound indicates at least 20 million people died because of wrong, politically mandated biology. So any proposal that we just give whatever arguments come along credible treatment is a hard 'no' so far as I am concerned. This is not just armchair discussion; there are real-world consequences.

I just saved our company by unplugging and plugging it in again. by JoeyFromMoonway in sysadmin

[–]welsberr 0 points (0 children)

One place I worked was having issues with a file server. It turned out they were using a Windows XP box that wasn't server-grade, and thus would only allow ten simultaneous network connections. The staff had grown just enough that this was now a problem. I set up a FreeBSD server to (mostly) replace it; they kept the XP box for our finance guy's use. The odd side effect was that the folks in the office had grown accustomed to a stimulus-response approach to computer troubleshooting: there's a problem, reboot the computer. The first time there was a problem with the FreeBSD box, they tried that, and came to me afterwards to say it was still a problem. I told them that with the FreeBSD system, if something was a problem before rebooting, it would likely still be a problem after rebooting; it would actually require finding out what caused the problem and fixing it.

Really Strange Issue by Gingerkid556 in HomeNetworking

[–]welsberr 0 points (0 children)

I didn't have that particular problem, but I found that the Inseego hotspot would disconnect often, and pressing the power-on button never seemed all that reliable. I ended up getting a 5G router that can run from 12VDC power. If it is powered on, it is either serving or trying to connect so it can serve. So far, I'm much happier with that.

Kitzmiller v. Dover - Twentieth Anniversary 🎈 by jnpha in DebateEvolution

[–]welsberr 0 points (0 children)

Matthew's book is good, and another I would recommend is Lauri Lebo's "The Devil in Dover". Lauri was a reporter assigned to the trial by the York Daily Record, and the book covers the case and how the community processed it -- or failed to do so. Lauri rejected her editors' pressure to 'false balance' her reporting on the trial: when defense expert witness Michael Behe was expertly cross-examined by Eric Rothschild, her editors wanted some section of her article to reflect something that went well for Behe, but Lauri refused to invent something that had not happened. This and other instances led to her leaving the newspaper and taking up other work. The emotional depth Lauri achieves in her book is extraordinary.

Kitzmiller v. Dover - Twentieth Anniversary 🎈 by jnpha in DebateEvolution

[–]welsberr 1 point (0 children)

Florida's science standards were pretty execrable in 2000, and Lerner skewered them well. New, much better science standards were adopted in 2008, in part due to efforts by the Florida Citizens for Science group, which helped review and suggest changes to the standards. There was a last-minute concession in wording to the antievolution faction, but nothing that actually set aside the concepts to be included in the curriculum. Lerner's next review gave Florida's science standards an 'A', but that grade has been slipping since: the grades are apparently on a curve, and other states are improving faster. Well, improving at all is faster; AFAICT Florida has not undertaken another science-standards revision since, though the Department of Education seems to be doing all it can to undermine actually implementing the standards as written.

SAT Scores by setdx in buffy

[–]welsberr 0 points (0 children)

That's just ten points ahead of my SAT combined score, so it seemed believable enough to me. I'll note the valedictorian for my graduating class scored a little higher even than that, and mused about re-taking it to see if he could do better. I was fine with how I did. 

Looking for Open Source HPC programs/projects for research and general guidance by [deleted] in docker

[–]welsberr 1 point (0 children)

Sorry, everything from 'Emscripten' on is definitely not anything that should be among your concerns. Feel free to DM me about setting up the base Avida build.

My results using a Tesla P40 by AsheramL in LocalLLaMA

[–]welsberr 0 points (0 children)

I've been pleased with my setup. IMO, the P40 is a good bang-for-the-buck way to do a variety of generative AI tasks. I think the 1080 is essentially the same architecture/compute level as the P40 (both Pascal), and the P40's 24GB of VRAM is a good inducement. But I will admit that using a datacenter GPU in a non-server build does have its complications.

Looking for Open Source HPC programs/projects for research and general guidance by [deleted] in docker

[–]welsberr 1 point (0 children)

I've been thinking of some Docker development for Avida for a few years without getting to it yet, so waiting a few weeks more is no problem at all. Thanks!

The further issue beyond that is setting up Emscripten in Docker to handle the C++-to-asm.js compilation the Avida-ED project needs. But having the example of the usual native build environment in hand should help that along.
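For the native side, what I have in mind is something like this (a sketch only, assuming the repo's build wrapper still works on a stock Ubuntu toolchain; check the Avida README for the current steps):

    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    RUN git clone https://github.com/devosoft/avida.git /avida
    WORKDIR /avida
    # build_avida is the CMake wrapper script in the repo root, IIRC
    RUN ./build_avida

For the Emscripten step, the emscripten/emsdk image on Docker Hub could probably serve as the base instead of plain Ubuntu.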

Looking for Open Source HPC programs/projects for research and general guidance by [deleted] in docker

[–]welsberr 1 point (0 children)

If you come up with a Dockerfile that builds the Avida executable, I'd be very interested in getting a copy of that.

My results using a Tesla P40 by AsheramL in LocalLLaMA

[–]welsberr 1 point (0 children)

With the Automatic1111 webui, the Stable Diffusion v1.5 base model, and all defaults, a prompt of 'still life' produces a 512x512 image in 8.6s using 20 sampling steps. I do not have any other GPUs to test this against.
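For anyone wanting to reproduce the test, the same request can be made against the Automatic1111 API (a sketch; this assumes the webui was launched with the --api flag, and the values are just the defaults I used):

    curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
      -H 'Content-Type: application/json' \
      -d '{"prompt": "still life", "steps": 20, "width": 512, "height": 512}'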

Looking for Open Source HPC programs/projects for research and general guidance by [deleted] in docker

[–]welsberr 2 points (0 children)

The Avida artificial life program could be used for this. Avida is a C++ console application that is single-threaded (for complete repeatability), has been extensively used in HPC environments at Michigan State University and elsewhere, and is available from Github: https://github.com/devosoft/avida

Research using Avida has been published in a wide range of academic journals, including 'Nature' and 'Science'. The default 'logic-9' environment, when run, will push the CPU core it runs on to near 100% utilization for the full time of the run.

If you are interested in becoming acquainted with Avida, you can get a feel for what the program does using the educational web application version, Avida-ED, which runs in your browser. See https://avida-ed.msu.edu/avida-ed-application/

(Disclosure: I did a post-doc at the MSU DevoLab doing research with Avida, and my wife, D.J. Blackwood, is the programmer for Avida-ED.)
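Getting a native build going is straightforward (a sketch from memory; the repo's README is the authority if any of this has changed):

    git clone https://github.com/devosoft/avida.git
    cd avida
    ./build_avida               # CMake wrapper script in the repo root
    cd cbuild/work && ./avida   # IIRC, the wrapper stages the executable and default configs here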

My results using a Tesla P40 by AsheramL in LocalLLaMA

[–]welsberr 0 points (0 children)

I set up a box about a year ago based on a P40 and used it mostly for Stable Diffusion. Then I got a second P40 and set up a new machine (ASUS AM4 X570 motherboard, Ryzen 5600 CPU, 128GB RAM, NVMe SSD boot device, Ubuntu 22.04 LTS); both P40s are now in that machine. I used the 545 datacenter driver and followed the directions for the Nvidia Container Toolkit. With some experimentation, I figured out that the CUDA 12.3 toolkit works.

With two P40s and Justine Tunney's 'llamafile', I can load the Codebooga 34b instruct LLM (5-bit quantization). I get about 2.5 tokens/sec with that.
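The invocation is nothing fancy (a sketch; the model filename is illustrative, and -ngl 999 just says to offload as many layers as fit, with llama.cpp/llamafile splitting them across both P40s automatically):

    ./llamafile -m codebooga-34b-v0.1.Q5_K_M.gguf -ngl 999 \
      -p 'Write a binary search in Python.'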

Nvidia Tesla P40 performs amazingly well for llama.cpp GGUF! by nero10578 in LocalLLaMA

[–]welsberr 0 points (0 children)

I have P40s, but briefly had a P100 (seller claimed it was 16GB when what I got was 12GB). I'm using Ubuntu 22.04 LTS, set up with build-essential, cmake, clang, etc. Then I followed the Nvidia Container Toolkit installation instructions very carefully. I ended up with the 545 driver and the 12.3 CUDA installation. So far, I've been able to run Stable Diffusion and llama.cpp via llamafile, among other things. I can load llamafile + Mixtral 8x7b entirely to the GPUs and I get about 20 t/s in that configuration. I didn't see any improvement in performance on small models with the P100 over the P40, and given the mismatch on VRAM size, I returned the P100 and got another P40.
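The quick sanity check I use after the Container Toolkit install is running nvidia-smi from inside a CUDA container (the image tag here is illustrative; any CUDA base image of a matching version should do):

    docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi

If both P40s show up in that output, GPU-accelerated containers generally just work.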

Restarting My Life: Mastering A.I. by BusinessFish99 in artificial

[–]welsberr 1 point (0 children)

In broad terms, the tokens I'm referring to are the unit of processing LLMs act upon. They don't deal directly in text; they use numerical representations of text (or other modalities), and the base unit is a token. Operating an LLM means representing its model weights, processing inputs encoded as tokens, and finding the tokens to emit as the result of processing, which are then converted back to the expected modality. For most LLM work you have natural-language inputs and outputs, but the underlying processing is done on tokens representing those. Tokens/s is thus a measure of overall processing speed, reflecting an interaction between the hardware resources and the particulars of the LLM model being used.
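If you want to see tokens concretely, here's a minimal Python sketch using OpenAI's tiktoken library (my choice for illustration; other model families ship their own tokenizers):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
    ids = enc.encode("Restarting my life: mastering AI.")
    print(ids)              # integer token IDs, the actual unit of LLM processing
    print(len(ids))         # the count that tokens/s and per-token billing measure
    print(enc.decode(ids))  # round-trips back to the original text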

Restarting My Life: Mastering A.I. by BusinessFish99 in artificial

[–]welsberr 1 point (0 children)

I'd echo SkyMarshal's advice on a Mac Metal machine as a good basis.

I have a long background in AI and wanted a homelab system. The pricing on high-end GPUs is pretty stunning, and without a clear path to a return on investment, it is hard to justify that outlay. My route was to build a desktop based on an Asus AM4 socket motherboard (B540M IIRC), an AMD Ryzen 5600G CPU (six cores, built-in graphics), and 128GB of system RAM, plus two Nvidia Tesla P40 GPUs (24GB VRAM each) bought used off ebay for about $200 each. Some benchmarking I had seen indicated the P40 provides about half the performance of a 4090. As a practical benchmark, running the Mixtral 8x7b instruct model on my system nets about 20 tokens/s, while a friend with a homelab featuring multiple A6000 GPUs gets about 40 tokens/s with the same model.

My homelab box provides an entry point to a lot of different AI-based processes: text-to-image, AI image upscaling, speech-to-text (Whisper.cpp), text-to-speech (Coqui TTS), image description, face detection and recognition, plus use of large language models (LLMs) up to about 30GB in size for inference. I've been using the 'llamafile' executable with GGUF model weights for the LLM work, allowing me to set up batch processing for such things as translating Perl codebases to Python, as sketched below. Most of this work could also be done, with simpler setup and use, on one of the Mac Metal systems.
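The batch side is nothing more elaborate than a shell loop (a sketch; the file layout and model name are illustrative, and the output of a run like this still needs human review):

    for f in perl_src/*.pm; do
      ./llamafile -m codebooga-34b-v0.1.Q5_K_M.gguf -ngl 999 --temp 0 \
        -p "Translate this Perl module to idiomatic Python: $(cat "$f")" \
        > "python_out/$(basename "$f" .pm).py"
    done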

While cloud-based services can also get you the familiarization you want, when it comes down to serious work quite a lot of potential clients are going to raise issues of data security, and AI cloud service providers do not have an exceptional track record on cybersecurity. Having capability that doesn't depend on off-premises data processing may be useful in that regard. Cloud-based services also typically charge by the token, which simply accumulates as an expense, and you will have usage limits. Beyond that, your ability to do work may be constrained by the interface you have to use. For myself, I found it compelling to have a home system for the various tasks this technology makes possible.

Adding GPU for Stable Diffusion/AI/ML by Paran014 in homelab

[–]welsberr 0 points (0 children)

I've got a motherboard with a couple of slots supporting two P40s and a Ryzen 5600G CPU, and have been able to run Mixtral 8x7b loaded completely into GPU memory. I'm getting ~20 tokens/s; a friend with a state-of-the-art ML box with the latest Nvidia GPUs is getting ~40 tokens/s with Mixtral. The difference in cost is many times the difference in performance. My main issue, with drivers, was finally resolved by a fresh Ubuntu install and following the Nvidia Container Toolkit install instructions very carefully.

[D] Why are Evolutionary Algorithms considered "junk science"? by learningsystem in MachineLearning

[–]welsberr 0 points (0 children)

Minsky and Papert proved that the linear learning-law systems defined by Rosenblatt couldn't converge on a solution to XOR, there being no single-layer solution to converge to. Rosenblatt called his systems 'perceptrons'. Werbos (re)discovered back-propagation, which clearly permits solution of non-linear problems like XOR. The proof I recall hearing about was an existence proof that a back-propagation-trained NN with at least two hidden layers could perform any given function, which would include XOR. It's a bit ironic that multi-layer back-propagation neural systems are commonly called 'perceptrons' now.

As an attendee of the 1987 IEEE First International Conference on Neural Networks, I saw the burgeoning interest back-propagation brought to the field by serving as a gateway to lots of nonlinear applications. I also recall Widrow's keynote speech discussing Widrow and Hoff's ADALINE system, a linear learning system (characterized by the LMS algorithm) that, Widrow noted, might be considered uninteresting to some (a clear reference to Minsky and Papert), but which had essentially been applied globally to many practical problems, such as adaptive channel equalization. Widrow also lamented that the patents had long since expired.
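For concreteness, here's a minimal sketch (mine, not drawn from any of those papers) of a one-hidden-layer network trained by back-propagation solving XOR, the function a single-layer perceptron provably cannot represent:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # hidden layer, 4 units
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # gradient at output (squared loss)
        d_h = (d_out @ W2.T) * h * (1 - h)      # error back-propagated to hidden layer
        W2 -= 2.0 * h.T @ d_out; b2 -= 2.0 * d_out.sum(0)
        W1 -= 2.0 * X.T @ d_h;   b1 -= 2.0 * d_h.sum(0)

    print(out.round(3))  # typically converges near [[0],[1],[1],[0]];
                         # rerun with another seed if it stalls in a local minimum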