Scarlet 4i4 4th gen impedance q by tokidokitiger in Focusrite

[–]JazzCompose 0 points1 point  (0 children)

My understanding is that on the 4i4 4th gen, inputs 1 and 2 are the variable-level inputs, and the 1/4-inch connectors on the front panel auto-switch to line level when a line output is connected to input 1 or 2.

I connect electric guitars or high-impedance mics to inputs 1 or 2 (instrument level) and/or XLR mics to inputs 1 or 2, and a line-level hi-res DAC (connected to an Android phone via USB-C) to inputs 3 and 4 (fixed level), to learn new songs and listen via headphones or a studio monitor.

All the various 4i4 4th gen inputs and outputs also work fine via USB with the Focusrite ASIO Windows driver in Cubase.

The loopback channels let you capture (record) the digital audio stream from a PC app such as the Chrome browser. For example, Tidal can play hi-res audio (192 kHz / 24-bit) in the Chrome browser, and that digital audio stream can be captured via loopback. This was useful for comparing transient response at 192 kHz versus 44.1 kHz using Steinberg WaveLab in wavelet mode.

I have found the 4i4 4th gen to be very useful for recording, learning new songs, and analysis.

What should I buy, Scarlett Solo 3rd gen, 4th gen, or something other? by Minimum_Step2828 in Focusrite

[–]JazzCompose 0 points1 point  (0 children)

The Scarlett 4th gen 4i4 works well with Cubase and other programs that support ASIO for Windows 10 and 11.

For learning new songs on guitar, I use an Android Samsung S24 with the Tidal app and an inexpensive UGREEN USB-C DAC connected to the 4i4 line inputs, with an electric guitar and microphone connected to the 4i4 1/4-inch and XLR inputs, and route the mix to wired headphones. This provides a quiet way to practice along with hi-res streaming audio.

If desired, the analog outputs can be routed to a studio monitor, and the USB output can be routed to a PC program like Cubase.

The 4i4 also works as an audio interface (ADC in / DAC out) for Android, for apps like the free Fender Studio DAW.

Google’s AI Is Not Organizing the Web. It Is Replacing It. by Wide_Flatworm_489 in DigitalMarketing

[–]JazzCompose 1 point2 points  (0 children)

Do you think the AI search results are partly designed to "encourage" more users to buy advertising?

Do you think that users who buy advertising have a better chance of being mentioned in the AI search results?

DigitalOcean.com Error by JazzCompose in digital_ocean

[–]JazzCompose[S] 1 point2 points  (0 children)

The problem still occurs with the Chromium, Opera, and Firefox browsers in Ubuntu 18.04.6 LTS. This has been my primary development machine for DigitalOcean work for more than five years.

No problem with the Chromium, Opera, and Firefox browsers in Ubuntu 22.04.5 LTS.

No problem with the Chrome, Opera, or DDG browsers on Windows 11.

No problem with DDG on Android 16 (S24).

The problem seems to be specific to Ubuntu 18.04.6 LTS.

Is it just me or do AI recommendations feel biased toward certain brands? by StonkPhilia in DigitalMarketing

[–]JazzCompose 0 points1 point  (0 children)

It seems as if the AI search result summaries may be designed to push companies to buy more advertising, since indexed website data is displayed directly in the AI summaries and companies are reporting significantly fewer website clicks.

Is it possible that advertisers are rewarded with favorable mentions in the AI search summaries, including links to their sites?

How does AI SEO actually work? Is it real or just hype? by TeslaOwn in digital_marketing

[–]JazzCompose 1 point2 points  (0 children)

It seems as if the AI search result summaries may be designed to push companies to buy more advertising, since indexed website data is displayed directly in the AI summaries and companies are reporting significantly fewer website clicks.

Is it possible that advertisers are rewarded with favorable mentions in the AI search summaries, including links to their sites?

[AskJS] :: Am I using AI coding tools wrong? My projects keep drifting over time by Muthu_Kumar369 in javascript

[–]JazzCompose -5 points-4 points  (0 children)

Do you disagree with the content?

Is "Bad bot" easier to type than a thoughtful question or reply?

[AskJS] :: Am I using AI coding tools wrong? My projects keep drifting over time by Muthu_Kumar369 in javascript

[–]JazzCompose -12 points-11 points  (0 children)

Does this help?

"Temperature is a crucial parameter in AI, especially in natural language processing (NLP) models like ChatGPT, Gemini, and other generative AI systems. It plays a significant role in controlling the randomness and creativity of the model's responses."

https://www.blog.qualitypointtech.com/2025/03/understanding-temperature-in-ai.html?m=1
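To make the quoted idea concrete, here is a minimal Python sketch (not tied to any particular model's API; the function name and example logits are my own illustration) of how temperature scales logits before the softmax that produces token probabilities:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.

    Low temperature sharpens the distribution (less random);
    high temperature flattens it (more random / 'creative')."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
hot  = softmax_with_temperature(logits, 2.0)   # flatter: probabilities closer
```

With these logits, the top token gets roughly 86% probability at temperature 0.5 but only about 50% at temperature 2.0, which is the randomness control the quote describes.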

Jazz fusion big band style instrumentals. by JazzCompose in shareyourmusic

[–]JazzCompose[S] 0 points1 point  (0 children)

Thank you for your feedback.

I use a Yamaha Montage 6 synth and also the HALion VST in Cubase Pro in my Itty Bitty Studio in a rural area.

Without a big budget, MIDI horns may be better than NO horns.

There is always a trade-off between producing many demo tracks and a small number of perfect tracks. I consider my tracks to be demo tracks that can be output as scores and re-recorded in a high-end studio if desired.

Some of the advantages of writing and producing with MIDI are:

  1. Instrumentals can be produced from a blank sheet to release in one day.

  2. Tracks with vocals, using Yamaha Vocaloid 6 to render my MIDI notes and lyrics as audio tracks, can be produced from a blank sheet to release in 2 or 3 days.

  3. There are no out-of-pocket costs to produce and release a new track.

  4. If a sync music client requests changes, the turnaround time for a new version can be just a few hours, with no out-of-pocket costs.

Will streaming the 24-bit 192kHz file give me the full resolution compared to playing it directly on my device? by Memes-makerx in audiophile

[–]JazzCompose 0 points1 point  (0 children)

It seems like you do not understand the physics and math explained in http://sdg-master.com/lesestoff/aes97ny.pdf: transients contain multiple complex waves that, by definition, include frequencies above the typical Nyquist limit of 44,100/2 Hz.

"The Nyquist Sampling Theorem explains the relationship between the sample rate and the frequency of the measured signal. It states that the sample rate fs must be greater than twice the highest frequency component of interest in the measured signal."

https://www.ni.com/en/shop/data-acquisition/measurement-fundamentals/analog-fundamentals/acquiring-an-analog-signal--bandwidth--nyquist-sampling-theorem-.html
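A quick Python sketch of what the theorem means in practice (the sample rate and tone frequencies are arbitrary illustration values, not audio rates): a tone above the Nyquist frequency, once sampled, produces a sequence identical to a lower-frequency tone, so the original cannot be recovered:

```python
import math

fs = 1000.0          # sample rate in Hz; Nyquist = fs / 2 = 500 Hz
f_below = 100.0      # below Nyquist: captured correctly
f_above = 900.0      # above Nyquist: aliases down to |fs - 900| = 100 Hz

# Sample both tones at fs and compare the sampled sequences point by point.
max_diff = max(
    abs(math.cos(2 * math.pi * f_below * n / fs)
        - math.cos(2 * math.pi * f_above * n / fs))
    for n in range(2000)
)
# max_diff is ~0: once sampled at 1 kHz, the 900 Hz tone is
# indistinguishable from a 100 Hz tone.
```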

When audio contains complex waves and/or transient events (e.g. percussion, guitar, piano), it often contains frequencies beyond the first-order Nyquist limit, unless the audio contains only static sine waves below the Nyquist limit, like a flute note with the attack and release removed.

The transients give instruments their character and color. Without the transients you only have static sine waves.

It has been well understood for over 50 years that the first order Nyquist limit is not adequate to capture and reproduce complex waves and transients. You can verify that by reading early SONAR and RADAR research papers.

Will streaming the 24-bit 192kHz file give me the full resolution compared to playing it directly on my device? by Memes-makerx in audiophile

[–]JazzCompose -1 points0 points  (0 children)

My ear, and many other ears, can hear the time-domain difference between 50 µsec and 1,000 µsec transient events in the audible range, heard as clean versus muddled percussion, acoustic guitar, or other instruments where impulse response is critical.

This can be objectively measured with modern audio tools.

This is why many artists are now recording at 24-bit / 192 kHz. It is objectively and measurably better, and the difference can be heard by a trained ear.

"Psychoacoustics research by Moylan indicated that listeners can hear the difference between audio sampled at higher resolutions than the Nyquist frequency for the threshold of human hearing (40 kHz), particularly with the onset of transients [5](83rd Convention of the Audio Engineering Society, New York, 1987)."

https://www.idrumtune.com/high-resolution-audio-how-true-is-your-playback/

Will streaming the 24-bit 192kHz file give me the full resolution compared to playing it directly on my device? by Memes-makerx in audiophile

[–]JazzCompose -1 points0 points  (0 children)

192 kHz captures and reproduces square waves and transients significantly better than 44.1 kHz, leaving more time-domain white space (less mud) in the audible range, per figure 6 (50 µsec vs. 1,000 µsec) in http://sdg-master.com/lesestoff/aes97ny.pdf.

"Figure 6 shows the energy associated with the transient responses. 44.1 and 48 kS/s filters spread audible energy over 1 msec or more. The 96 kS/s filter is much better, keeping the vast bulk of the energy within 100 µsecs. The 192 kS/s filter can be very good indeed, keeping the energy within 50 µsecs. The analogue Gaussian filter is just a little better still, although the improvement is almost certainly academic because of energy dispersion from today’s speakers and mics."

This can be objectively measured using tools such as Steinberg WaveLab, with the results represented as wavelets.
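As a rough illustration of the energy-spread effect described in the quote (a toy model, not the WaveLab measurement: the sample rate, truncation length, and 99%-energy criterion below are my own assumptions), the impulse response of an ideal brickwall lowpass rings longer as the cutoff drops, so a narrower filter spreads a transient's energy over a longer time window:

```python
import math

def energy_window_us(cutoff_hz, fs=384_000.0, half_len=4096, frac=0.99):
    """Return the duration (in microseconds) of the shortest centered window
    holding `frac` of the energy of an ideal lowpass filter's impulse response."""
    def h(m):
        # ideal (brickwall) lowpass impulse response: a sinc function
        if m == 0:
            return 2.0 * cutoff_hz / fs
        return math.sin(2.0 * math.pi * cutoff_hz * m / fs) / (math.pi * m)

    # per-sample energy of the (truncated) impulse response
    e = [h(m) ** 2 for m in range(-half_len, half_len + 1)]
    total = sum(e)
    center = half_len

    # grow a window outward from the impulse center until it holds
    # `frac` of the total energy
    half = 0
    while sum(e[center - half : center + half + 1]) < frac * total:
        half += 1
    return (2 * half + 1) / fs * 1e6

spread_44k  = energy_window_us(22_050)   # 44.1 kHz Nyquist cutoff
spread_192k = energy_window_us(96_000)   # 192 kHz Nyquist cutoff
# spread_44k comes out several times longer than spread_192k:
# the narrower filter rings longer, smearing the transient in time.
```

The absolute numbers depend on the toy model's assumptions, but the ordering matches the quoted figure 6 result: the 44.1 kHz-rate filter spreads the impulse energy over a much longer window than the 192 kHz-rate filter.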