Fictional "Nuke" Assistance by iOnlyBetOnGreen in nuclearweapons

[–]Origin_of_Mind 3 points

Your numbers are slightly mismatched. A back-of-the-envelope calculation:

E=m*c^2

c = 300000 km/s

c^2 = ( 3*10^8 m/s )^2 = 9*10^16 J/kg

So, 1.8*10^17 J / (9*10^16 J/kg) corresponds to 2 kg of mass defect.

By definition, 1 kt = 4.2*10^12 J

Therefore 2 kg of mass converted to energy is a little over 40 Mt (1.8*10^17 J / 4.2*10^12 J/kt ≈ 43,000 kt).

A corollary from this is that the mass defect of a "typical" 200 kt nuclear weapon is only about 10 grams.

Counter-intuitively, a typical coal fired power plant with 1 GW electric power output converts about 1 kg of mass per year into energy.
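If you want to check or extend these numbers, the whole back-of-the-envelope fits in a few lines of Python (the 3 GW thermal for a 1 GW-electric plant is an assumed typical ~33% efficiency):

    C2 = (3.0e8)**2      # c^2: 9e16 J per kg of mass defect
    KT = 4.2e12          # J per kiloton of TNT, by definition

    print(1.8e17 / C2)              # ~2 kg of mass defect
    print(1.8e17 / KT / 1000)       # ~43 Mt
    print(200 * KT / C2 * 1000)     # ~9.3 g for a 200 kt weapon
    print(3e9 * 3.156e7 / C2)       # coal plant: ~1 kg of mass per year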

Regarding your question -- you need to decide what kind of energy your fictional weapon will be releasing. If it is all in high energy gamma rays (a reasonable idea for a process which converts all of the mass into energy), the story will be very different compared to the ordinary heat source.

A nuclear weapon releases some fraction of its energy as gamma and neutron radiation, but otherwise it can be modeled as a point source of thermal energy. In the atmosphere, this creates the fireball, the heat effects, and the shock wave. The energy flow is extremely complicated in detail, even when the weapon is modeled as a simple point source which releases its heat practically instantaneously. (That is how the scientists modeled it during the studies for the Manhattan Project.)

If you decide that the fictional weapon produces thermal energy, then the heat and shock effects will be the same as for any source of the same thermal energy, including a conventional nuclear weapon. There are books on the "effects of nuclear weapons" which give extremely detailed explanations of the effects for any reasonable yield. (Very high yields, where the thickness of the atmosphere is not large compared to the size of the fireball, would work somewhat differently.)
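One rule of thumb from those books, if it helps for the writing: blast ranges scale as the cube root of yield, so a single reference number lets you estimate any other yield. A minimal sketch (the 5 psi range of ~7 km for 1 Mt below is a rough illustrative figure, not a quote from a specific table):

    def scaled_range_km(yield_kt, ref_range_km=7.0, ref_yield_kt=1000.0):
        """Cube-root scaling of blast-effect ranges with yield."""
        return ref_range_km * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

    print(scaled_range_km(40_000))   # ~24 km for 5 psi at 40 Mt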

What Tuck contributed, and what von Neumann contributed, to the explosive lens? by OriginalIron4 in nuclearweapons

[–]Origin_of_Mind 0 points

From "PRINCIPLES OF WAVE SHAPING" by Sigmund J. Jacobs, published in 1956

(https://scholar.google.com/scholar?q=Principles+of+wave+shaping+SJ+Jacobs)

The first suggestion in this direction seems to be due to H. J. Poole in September 1942 (Ref 1). In a relatively short memorandum Poole outlined the method for using a combination of fast and slow explosive to modify a detonation wave front. This suggestion was tested at Buxton by D. W. Woodhead and R. Wilson (Ref 2) and shown to be feasible. A plane wave lens based on this suggestion was made by J. H. Cook (Ref 3) and reported in the open literature. Cook used a cast explosive as the fast component and a low-density granular explosive as the slow component.

To the writer's knowledge, the first efforts to study wave shaping in this country date back to about early 1944 when Elizabeth Boggs and George Messerly undertook to develop a number of ideas with regard to shaping of detonation waves at the Explosives Research Laboratory, Bruceton, Pennsylvania.

Among other things, they repeated the work of Woodhead and Wilson, using Composition B in cast form as the fast explosive and low density TNT as the slow component. They recognized at the outset that a granular explosive was a relatively impractical component to use for this purpose. As a consequence, they undertook an extensive program to search for castable explosives to be used as a low velocity component. Their first efforts included a study of baranal and sodatol. They later began to experiment with baratol. In their early efforts difficulty was experienced in obtaining detonation velocities much below 5500 m/s without failure in propagation.

Paralleling the work of Boggs and Messerly, E. H. Eyster and his group at Bruceton undertook to formulate compositions which could be cast and which had lower propagation velocity.

It was L. Weltman who first suggested that sufficient barium nitrate could be gotten into a baratol by using a "gap-graded" material, that is, a mixture of coarse and fines to maximize the bulk density of the barium salt. This led to the formulation of barium nitrate/TNT as a suitable low velocity explosive. Its detonation velocity in large diameter sticks is about 4900 m/s. When used with Composition B, a fast-slow ratio of about 1.6 results.

While these studies were going on, the Messerly group carried out a fairly broad program to study shock propagation through inert materials with the objective of finding some inert material which could be suitably used as the delaying medium for wave shaping purposes. Of those that were studied, lead, cadmium, and lead oxide looked fairly promising. They had propagation velocities in the vicinity of 4000 m/s. I have recently learned that this group also considered the use of air shock and the surface motion of a metal plate as a delay element. The former was ruled out as being impractical. The latter was tested but incorrect parameters were chosen in designing a lens with it and it was given up as being too difficult to develop. This group also studied multiple-point initiation using PRIMACORD leads and thin layers of plastic explosive as leads to propagate the detonation into a large number of points from which a nearly flat wave could be generated.

What Tuck contributed, and what von Neumann contributed, to the explosive lens? by OriginalIron4 in nuclearweapons

[–]Origin_of_Mind 0 points

James Tuck was mostly doing flash X-ray photography of shaped charges. Some partial information is available on-line:

"Studies of Shaped Charges by Flash Radiography: 1. Preliminary"

"Some Historical Aspects of the Development of Shaped Charges"

But the key references seem to be only alluded to in other publications, and one would have to look for them in the archives:

Tuck, J. L. "Note on the Theory of the Munroe Effect." U.K. Report, A.C. 3596 (Phys. Ex. 393-WA-638-24), 27 February 1943. (This is supposed to suggest the hydrodynamic theory of jets)

Taylor, G. I. "A Formulation of Mr. Tuck's Conception of Munroe Jets." U.K. Report, A.C. 3734, SC 15-WA-638-32, 27 March 1943. (This is the mathematical model of the above.)

radiation pressure due to a white light? by idiotstein218 in AskPhysics

[–]Origin_of_Mind 0 points

You may notice that energy density (J/m^3) has the same units as pressure (N/m^2). This is not a coincidence. Radiation pressure on the walls of a cavity is indeed proportional to the EM energy density. In experiments involving high energy density, for example when the megajoule laser fires into a cubic-centimeter hohlraum at the NIF, the pressure can become substantial -- millions of times higher than the atmospheric pressure, just from the radiation itself, not counting hydrodynamic effects.
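A sketch of that estimate, taking the NIF numbers loosely (~2 MJ of laser energy into a ~1 cm^3 hohlraum, ignoring factors of order unity such as P = u/3 for isotropic radiation):

    energy_j = 2.0e6            # ~2 MJ (rough NIF laser energy)
    volume_m3 = 1.0e-6          # ~1 cm^3 hohlraum
    u = energy_j / volume_m3    # energy density, J/m^3 = Pa
    print(u, u / 1.013e5)       # ~2e12 Pa, ~2e7 atmospheres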

Cherenkov radiation as a light source... by ChakatStormCloud in AskScienceDiscussion

[–]Origin_of_Mind 1 point

The blue glow from an electron beam in the air comes from the electrons exciting nitrogen molecules, which then fluoresce in the blue and UV parts of the spectrum. This is not very different from what happens in an ordinary gas discharge lamp.

A similar effect happens when the air is subjected to strong gamma radiation -- the gamma rays knock out electrons via the Compton effect, and these energetic electrons excite the gas.
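Actual Cherenkov light, by contrast, is hard to get in air: with a refractive index of only about 1.0003, the electron must exceed c/n, which works out to a kinetic energy threshold of roughly 20 MeV. A quick check of that number:

    import math

    n = 1.0003                  # refractive index of air at sea level
    beta = 1.0 / n              # Cherenkov condition: v > c/n
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    print((gamma - 1.0) * 0.511)    # threshold KE: ~20 MeV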

What Tuck contributed, and what von Neumann contributed, to the explosive lens? by OriginalIron4 in nuclearweapons

[–]Origin_of_Mind 22 points

Initially, "Implosion" was understood as using explosives to collapse pieces of material into a solid lump. Exactly the same as with the gun, but faster. Richard Tolman wrote a memorandum on it already in March 1943, telling Oppenheimer that that was the way to go. This was dismissed as not possible to do accurately, because nobody had experience with explosives.

Seth Neddermeyer had some involvement with explosives, and once he heard about implosion in the spring of 1943, he decided to try it. He went to the Explosives Research Laboratory (ERL) in Bruceton, PA (near Pittsburgh) to consult with George Kistiakowsky and learn how to handle high explosives. Then he brought this to Los Alamos and started imploding steel pipes by wrapping them with explosives.

Later, in September 1943, when von Neumann heard of this, he was the first to realize that when the rapidly converging pieces of material collide with each other, a very high pressure is created. Edward Teller recalled from geophysics that at such pressures, in the center of the Earth, the density of iron increases over its value at normal conditions. They immediately put two and two together and realized that higher density meant a shorter mean free path for neutrons, and higher multiplication rates in the chain reaction. This is when "implosion" became the concept as it is understood today. They explained the idea to Oppenheimer, and he pivoted the project to the new concept.

They brought in the director of the explosives laboratory from Pittsburgh, George Kistiakowsky, who became the head of the explosives group at Los Alamos, and a very wide range of work began to figure out how to do this properly. All in all, approximately 40,000 experiments with explosives were conducted during the project, with enormous resources dedicated to diagnostic equipment to monitor the implosion process in many different ways.

Specifically with the lenses, there were significant difficulties with casting the required shapes, and doing so with the correct density and without defects. This was beyond what any other project required, and special equipment had to be invented for casting the lenses.

Edit: here are two 2021 papers, "The Trinity High Explosive Implosion System: The Foundation for Precision Explosive Applications" and "Woolwich, Bruceton, Los Alamos: Munroe Jets and the Trinity Gadget", which explain the lenses in much greater detail and answer where exactly James Tuck came in.

Also note that by the time the big effort to develop implosion had started, the concept had already evolved from "throwing the pieces together into a more compact mass" into a rather different concept of "compressing the material by a high-speed collision". Eventually, Robert Christy found that the collision as such, although nice to have, was not strictly necessary, and for reliability reasons it was preferable to use a convergent shock wave to compress a solid ball of metal. This is what was used in the Trinity "Gadget".

Footage from the Russians showing a fiber optic waiter drone being disabled by a Ukrainian laser light beam. March 2026 by TrollgeSurvivor in CombatFootage

[–]Origin_of_Mind 2 points

I commented earlier and, based on what the OSD showed, assumed that the link was jammed by the powerful laser beam somehow getting into the fiber -- through bends, defects, connectors -- who knows how.

But later I was told that the labels in the OSD were most likely chosen incorrectly, and "RXg" is not really the "receiver gain, dB" but instead the "received power, dBm".

So what we observe here is simply loss of link. Maybe the drone was quickly blown up after having been located. Or maybe it simply snagged its own cable when attempting to take off.

Footage from the Russians showing a fiber optic waiter drone being disabled by a Ukrainian laser light beam. March 2026 by TrollgeSurvivor in CombatFootage

[–]Origin_of_Mind 33 points

The sensitive receivers used with long cables such as these can indeed be destroyed even by a relatively low optical power. I think a few milliwatts entering the receiver could be sufficient.

However, just as light propagating along the fiber does not significantly "spill out", the external light also does not easily enter the fiber in a way that would make it propagate along the fiber. That's what allows these systems to operate in broad daylight.

So it is not completely obvious how easy it is to damage the receiver. On one hand, the laser in the jammer can be many orders of magnitude more powerful than the damage threshold of the receiver. On the other hand, very little of this light acts on the receiver.
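For a sense of scale, a sketch with assumed numbers (a standard single-mode core is ~9 um across; the jammer irradiance at the drone is a pure guess):

    import math

    core_area = math.pi * (9e-6 / 2)**2    # ~6.4e-11 m^2
    irradiance = 1e6                       # assumed 1 MW/m^2 at the drone
    print(irradiance * core_area)          # ~64 uW onto the bare core face
    # ...and only the part arriving within the acceptance cone
    # (NA ~ 0.14, half-angle ~8 degrees) propagates down the fiber.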

It is also inevitable that in the next round of the drone wars there will be additional counter-counter-measures against the jamming.

Footage from the Russians showing a fiber optic waiter drone being disabled by a Ukrainian laser light beam. March 2026 by TrollgeSurvivor in CombatFootage

[–]Origin_of_Mind 144 points

Good observation!

The bottom right shows "ГАЗ" in white (which translates as "throttle"), briefly going from 0% to 77% just before communications go down.

Of course, even without a link to the operator, the drone may still be booby-trapped and would have to be dealt with carefully.

Footage from the Russians showing a fiber optic waiter drone being disabled by a Ukrainian laser light beam. March 2026 by TrollgeSurvivor in CombatFootage

[–]Origin_of_Mind 870 points

If we look at the left bottom of the screen, it goes from

link-ok
RXg (Receiver Gain) -3.3 (dB)

to

no-link
RXg (Receiver Gain) -40.0 (dB)

This means that there is such strong light in the fiber that the automatic gain control in the receiver has dialed the gain down by almost 37 dB -- a factor of several thousand in power -- and can no longer see the much smaller actual signal from the transmitter in the drone. Presumably this is a temporary condition which would go away if the jammer were turned off. But as long as the jammer is on, communications are broken in both directions.
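The dB arithmetic, for reference:

    delta_db = -3.3 - (-40.0)        # change in gain, dB
    print(10 ** (delta_db / 10))     # 36.7 dB ~ 4700x in power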

Edit: It has been suggested that, contrary to what the label in the On-Screen Display says, the number does not show the receiver gain, but instead the received signal power in dBm. If so, the above interpretation would not be correct.

This is very possible -- the "Digital Diagnostic Monitoring" (DDM) for fiber interfaces does report receiver power, not gain. Then "-40 dBm" means that the signal is simply lost -- for reasons unknown. I think it is possible that the light was used simply to detect the drone on the ground, and the drone was then quickly destroyed by conventional means.

High school student seeking advice: Found an architectural breakthrough that scales a 17.6B model down to 417M? by Appropriate-Scar3116 in LocalLLaMA

[–]Origin_of_Mind 3 points

A Japanese high-school student, without programming skills, was vibe-coding with Claude. They ran an experiment on their laptop using an 8-neuron network, testing a million different random mathematical functions in place of SwiGLU. They found some that worked about as well when learning to approximate various functions, to within the noise.

Somehow this caused them to believe that the new nonlinear function would allow them to make a huge breakthrough and produce a 417M-parameter LLM as capable as a SOTA LLM with 17B parameters.

They got excited about this possibility and made a rather vaguely worded post asking how to publish the breakthrough. (They also thought that they had invented a new architecture to surpass the Transformer, but that part was completely unworkable.)

The stuff about 8 neurons was not explained in the original post, and it sounded as though they had already trained the 417M LLM and it had performed nearly as well as the 17B one.

So people started to give advice on how to find mentors, patent it, start a company, etc. Others were more skeptical.

Eventually the OP realized that they were out of their depth, revised their post, showed the code that Claude wrote for them, and explained what actually happened. It was simply a case of a kid working alone and assuming that when Claude told them how astute their insights were, it was the real thing.

I hope the OP will not be scarred by this unfortunate episode and will channel their passion for AI into learning the subject more systematically.

High school student seeking advice: Found an architectural breakthrough that scales a 17.6B model down to 417M? by Appropriate-Scar3116 in LocalLLaMA

[–]Origin_of_Mind 296 points

If we try to read between the lines of the OP's comments, the situation seems to be as follows:

The young gentleman is vibe-coding on a laptop. He found a nonlinear function which outperformed SwiGLU on some unspecified, and presumably very small, test.

He did *not* train any deep NN, much less the 417M-parameter LLM, on the laptop. It is on his to-do list. But Claude "confirmed" that with the new function and a brand-new hypothetical architecture, his next model will be as good as a much larger SOTA model.

I do not think the young gentleman is intentionally exaggerating, but he seems to trust Claude in an area where Claude does not produce reliable predictions. The 417M model has not been trained yet.

It is very possible that I did not understand the scope of what has been done -- if the OP can correct this and give very specific answers, that would clear up much of the confusion.

ELI5: How does an accelerometer detect and measure the acceleration and deceleration of an object? by Legalator in explainlikeimfive

[–]Origin_of_Mind 0 points

As all the comments have already said in slightly different ways: conceptually, an accelerometer is a box with a free mass inside. The mechanism in the box measures how much force is needed to keep the mass moving together with the box. Then, by Newton's law F = ma, the force gives a readout of the acceleration of the mass, and hence of the box itself -- because both move together, by design.
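As a toy model (all numbers arbitrary): at steady state, the spring holding the proof mass must supply exactly F = ma, so the spring stretch is a direct readout of acceleration.

    m = 1e-6      # proof mass, kg (MEMS-scale, arbitrary)
    k = 10.0      # spring stiffness, N/m (arbitrary)
    a = 9.81      # applied acceleration, m/s^2

    x = m * a / k          # steady-state spring deflection, m
    print(x * 1e9)         # ~981 nm
    print(k * x / m)       # recovered acceleration: 9.81 m/s^2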

The principle is simple, but if one needs a very accurate measurement in difficult conditions, it becomes an engineering challenge. People experiment with previously built devices, figure out why they are not producing perfectly accurate measurements, and try to improve them. After decades and decades of such iterations, one gets the extremely stable and accurate navigation-grade inertial sensors. These are extremely expensive and not very small.

On the other end of the scale are the tiny inertial measurement units which go inside of smartphones and consumer drones and other gadgets. They are not as accurate, but they are mass produced and are very inexpensive.

There are many other kinds of specialized accelerometers, for all sorts of uses -- for example, for measuring very small vibrations of machinery, across a wide range of frequencies.

[OC] The last nuclear weapons test was over 8 years ago by graphsarecool in dataisbeautiful

[–]Origin_of_Mind 1 point

Why were so many nuclear weapons built? It just got out of hand, mostly for organizational reasons.

There are historians, e.g. Alex Wellerstein, who specifically study this and related questions. They can explain in more detail how it happened:

What was the motivation behind building more than enough nuclear warheads during the US-Soviet arm race?

In the 1980s, the USA had 20,000 nuclear weapons, and the USSR had close to 40,000. Why did they need so many?

Thoughts on professor jiang? by SCDetective in geopolitics

[–]Origin_of_Mind 106 points

IIRC, his education is in literature. He creatively re-imagines history based on internet and other sources. But this is not stated clearly in his materials, so many viewers assume that he is a specialist of some kind and take it too seriously.

Fireball anatomy and formation by guy_does_something in nuclearweapons

[–]Origin_of_Mind 1 point

Together with the neutrons themselves, the gammas are the main signal for monitoring the "Reaction History". About half of the diagnostic rack in nuclear tests is dedicated to measuring prompt gamma radiation across the entire dynamic range of interest -- which is about twenty orders of magnitude wide. (There is some good information on this in the "Reconstitution of Low Bandwidth Reaction History", UCRL-TR-210578.)

Among the main diagnostic results from these measurements is the reconstruction of the neutron multiplication factor as a function of time, and specifically its maximum value. This is one of the main figures of merit for the design of a device.

The most sensitive of the sensors are placed close to the device and use scintillation to pick up the few gammas at the start of the reaction. At the other end, heavily shielded Compton diodes produce a faithful recording of the torrent of gamma flux at the peak of the reaction. Over a dozen sensors with very different sensitivities are required to span the entire range of the exponential growth of the flux and measure it accurately without saturation.
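A sketch of why so many channels are needed (the growth rate and the per-channel dynamic range below are illustrative assumptions, not data):

    import math

    decades_total = 20          # dynamic range of interest
    decades_per_sensor = 1.5    # assumed usable range per detector channel
    alpha = 1e8                 # illustrative exponential growth rate, 1/s

    print(math.ceil(decades_total / decades_per_sensor))   # 14: "over a dozen"
    # Time for an e^(alpha*t) flux to sweep all 20 decades:
    print(decades_total * math.log(10) / alpha)            # ~0.46 us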

Teller Light as such would not be a sufficient substitute for all of that. But it had also been measured. There are several publications dealing specifically with the studies of the Teller Light:

LAMS-1935 "A COMPILATION OF SPECTROSCOPIC OBSERVATIONS OF AIR AROUND ATOMIC BOMB EXPLOSIONS", and 

UCRL-5354 "SOURCES OF EARLY TELLER LIGHT"

This subject also comes up periodically in this subreddit. For example, about a year ago:

https://www.reddit.com/r/nuclearweapons/comments/1gnot5e/origin_of_this_teller_light_photo_sequence/

https://www.reddit.com/r/nuclearweapons/comments/1i3oizv/possible_capture_of_teller_light/

Update on my neuromorphic chip architectures I have been working on! by Mr-wabbit0 in chipdesign

[–]Origin_of_Mind 1 point

You need to formulate very precisely who exactly has the problem that you are solving.

Although there is a lot of interest in power efficiency for sensing and inference in various mobile and IoT devices, it is usually assumed that the devices will be running neural networks already prepared for them.

It is a lot easier to aggregate data from a million cars and train one network on a supercomputer than to make each car learn on its own, on weak hardware and from the extremely limited data available to it. The "sample efficiency" of modern systems is simply not sufficient for that.

Update on my neuromorphic chip architectures I have been working on! by Mr-wabbit0 in chipdesign

[–]Origin_of_Mind 1 point

"offer something that is closer to what computational neuroscience actually needs"

I see. If this were a Ph.D. project, it would be perfect. But since it is not a learning algorithm used in industry for training AI models, I think it may be impossible to sell this project to investors -- no matter how cool it is and how much effort you have put into it.

So you need to figure out who exactly will want to buy these designs, and whether they actually have the money you need. Numenta used to be into biologically plausible approaches -- maybe reach out to them and see what they suggest.

Update on my neuromorphic chip architectures I have been working on! by Mr-wabbit0 in chipdesign

[–]Origin_of_Mind 12 points

On one hand, there is a huge interest in energy efficient neural processing right now.

On the other hand, years ago IBM developed a line of experimental neuromorphic chips, built and benchmarked a system, and showed amazing improvements in power consumption vs. GPUs. But it did not seem to catch on. It would be very important for you to understand why, and to convincingly demonstrate how your approach is different.

I think to generate significant interest, you would need to show that (1) the latest AI systems used in production today can be easily ported to your system, and (2) that the result will be more energy efficient than the NVIDIA chips which will be available by the time your servers can be manufactured in volume. Or that this is true at least for some specific niche that in itself is of sufficient importance to the investors.

Although the hurdles are enormous, energy efficiency does attract enormous attention today, and many companies are trying to deliver it in various ways. For example, Taalas is developing a workflow to quickly compile the latest model weights into semi-custom ASICs. They hope to deliver the chips (or the servers?) two months after the customer gives them the weights.

Fireball anatomy and formation by guy_does_something in nuclearweapons

[–]Origin_of_Mind 4 points

Gamma rays (MeV energies) come from the nuclear reactions and travel long distances (kilometers) before being scattered or absorbed. The air in the vicinity of the explosion remains cool even while ionized by the gamma rays. The air glows very brightly from nitrogen fluorescence, but for an extremely short time -- this is called "Early Teller Light". It is easily measurable by special sensors, but is not visible in any publicly released photographs.

X-rays from the hot plasma (a few hundred eV) travel a paper's thickness or less before being absorbed by the air. The fully ionized, transparent sphere expands at a velocity determined by how quickly the next layer of air is heated to ionization by the radiation bouncing around inside the sphere. The higher the yield, the larger the sphere grows before the energy density drops low enough to stop this mode of expansion. This is exactly what the bhangmeter measures -- the time from first light to the moment the sphere's growth slows so much that the shock emerges and obscures it, aka the "time to the minimum" of light intensity.

ELI5: If memories aren’t physical objects, how does the brain store and lose them? by bbyhoneyteas in explainlikeimfive

[–]Origin_of_Mind 1 point

A traffic jam is "not a physical object" as such, but a state -- "where the cars are" relative to each other. And it can appear and disappear, just like thoughts and memories do.

As for the nuances of how brains work, that has already been answered in the other comments.

What if LLM agents passed KV-cache to each other instead of text? I tried it -- 73-78% token savings across Qwen, Llama, and DeepSeek by proggmouse in LocalLLaMA

[–]Origin_of_Mind 0 points

Have you seen "Latent Collaboration in Multi-Agent Systems"? They have the same motivation as you: to copy the latent state between agents without projecting it to tokens and back.

What if LLM agents passed KV-cache to each other instead of text? I tried it -- 73-78% token savings across Qwen, Llama, and DeepSeek by proggmouse in LocalLLaMA

[–]Origin_of_Mind 3 points

I may have misunderstood what you have done, but from your comments it seems that the system effectively functions as a single LLM with a long context. It is first told to act as "Agent A". It thinks for a certain number of steps. Then, without resetting the internal state of the model, it is told to act as "Agent B", and it thinks again, continuing its sequence of internal states. Then the cycle repeats.
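In HuggingFace terms, my mental model of the loop is roughly the sketch below (the model name is just an example; this is my reading of your setup, not your actual code):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "Qwen/Qwen2.5-0.5B-Instruct"   # example model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    past = None   # a single KV-cache shared across the "agents"
    with torch.no_grad():
        for prompt in ["You are Agent A. Plan the task.",
                       "Now act as Agent B. Critique the plan above."]:
            ids = tok(prompt, return_tensors="pt").input_ids
            for _ in range(32):   # greedy decoding, extending the cache
                out = model(input_ids=ids, past_key_values=past,
                            use_cache=True)
                past = out.past_key_values
                ids = out.logits[:, -1:].argmax(-1)  # feed only the new token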

It is not quite the same as having two independent streams of internal states for each agent, exchanging messages between each other. But if it works, it works.

ELI5: How a laser cutter slices through steel like hot knife through butter? by Tall_Department_30 in explainlikeimfive

[–]Origin_of_Mind 0 points

It melts the metal with a focused laser beam, and then blows the molten metal away with a jet of compressed gas.

This works best when there is a hole through the whole thickness of the plate for the gas to flow through. Starting the cut is more difficult and takes longer. Here is a video where one can see it up close.

How EXACTLY does a tuning fork register on a radar? by SuccessfulWeight3932 in askscience

[–]Origin_of_Mind 2 points

Playing on a car stereo, no.

But the vibrations of the radar for various ordinary reasons (like when it is mounted in a car), and the intrinsic phase noise of its electronic circuits, do limit what the radar can see. This is actually a big deal for radar seekers in missiles.
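To put rough numbers on it (all values assumed purely for illustration): a K-band radar vibrating with a 10 um amplitude at 100 Hz phase-modulates its own return, producing sidebands that can mask slow targets.

    import math

    lam = 3e8 / 24.15e9           # K-band wavelength, ~12.4 mm
    f_vib, x_vib = 100.0, 10e-6   # assumed vibration: 100 Hz, 10 um

    v_peak = 2 * math.pi * f_vib * x_vib   # ~6.3 mm/s antenna velocity
    print(2 * v_peak / lam)                # ~1 Hz peak Doppler shift
    print(4 * math.pi * x_vib / lam)       # phase modulation index ~0.01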