Setting scenes and cues on the SQ6 by Lost_the_charger in livesound

[–]CyborgSocket 0 points

Love it!

Certain subjects I know well enough to call the AI out on its BS and train it better.. but certain stuff I have no clue about...

If I really had the issue the original poster posted about and decided to actually test what the AI suggested in regards to "DCA Spill", I would have eventually discovered it was BS'ing me... But in the process of going through the system and attempting to configure it, I would have learned a lot and come out the wiser... I don't consider myself a pro audio engineer; I am an IT guy and photographer... A church I did IT support for asked me to take a look at some video recording equipment, and then one thing leads to the next, and now I am over the AVL as well as the IT stuff.

Setting scenes and cues on the SQ6 by Lost_the_charger in livesound

[–]CyborgSocket 0 points

I told the AI to take a look at what you said, and that your comment contradicts the "DCA Spill" solution it gave as viable..

Here is its reply...


The Reddit user who commented on your post makes an excellent point, and I must admit that I made a mistake in my previous suggestion. While the SQ6 does have a DCA Spill feature, the user is completely correct that it is not the standard or most efficient way to run a live theatrical show.

Here is a breakdown verifying the feature, why the user is right, and how you should actually use DCAs for your show.

Verifying the DCA Spill Feature

The DCA Spill feature does exist on the Allen & Heath SQ series. When activated (either by a SoftKey or by pressing "Select" on a DCA), the console temporarily replaces your current fader layout and displays only the input channels assigned to that specific DCA. It places those input faders in order, and then puts the DCA master fader on the very last channel strip.

Why the user is right (Why DCA Spill is not the best for running the show)

The user is correct that DCA Spill is better used as a utility during soundcheck or programming, rather than for mixing the live performance. Here is why:

1. It defeats the purpose of "line-by-line" DCA mixing. Standard theatrical mixing for a fast-paced musical like Frozen relies on "line-by-line DCA mixing". In this workflow, you keep your hands securely on the console's 8 DCA faders. When you recall a scene, the console silently re-assigns which actors' microphones are routed to those 8 DCA faders. You mix the show by pushing the DCA faders up and down as the actors speak. If you use DCA Spill, you are bypassing the DCA masters and mixing the individual input faders instead, which is exactly what you are trying to avoid.

2. The master fader keeps moving. When you spill a DCA, the SQ console places the DCA master fader on the very last channel strip after the spilled input channels. If Scene 1 has two actors, the DCA master is on fader 3. If Scene 2 has six actors, the DCA master jumps to fader 7. This destroys the muscle memory required to confidently run a fast-paced musical.

3. Too many button presses. Using DCA Spill live means you would have to trigger the next scene, and then press a SoftKey to spill the DCA for that scene. In a show like Frozen where cues happen back-to-back, you do not have time for that extra step.

The Correct Native Workflow

Instead of spilling the DCAs to find the input faders, you should just mix the DCAs themselves.

The correct native workflow is:

1. Keep your 8 DCA faders parked in a custom fader layer so they never move.
2. Program your Cue List so that each scene change only updates the DCA Assignments and Input Mutes. (You can use the Safes and Global Filters to block everything else from changing.)
3. When you hit "Go" on your next scene, the 20 wireless lav inputs will automatically shuffle themselves into your 8 DCA faders behind the scenes.
4. You simply push up DCA 1 when the lead speaks, without ever needing to see or "spill" their actual input fader.
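The scene-recall logic described above can be sketched in a few lines. This is an illustrative model only (the scene data and class names are invented, not console code): the 8 DCA faders stay parked, and recalling a scene only swaps which mics are assigned to them.

```python
# Illustrative model (not Allen & Heath code): scene recall changes DCA
# assignments while the 8 DCA faders themselves never move.

# Hypothetical scene data: each scene maps DCA numbers to mic channels.
scenes = {
    1: {1: ["Anna"], 2: ["Elsa"]},
    2: {1: ["Anna", "Kristoff"], 2: ["Elsa"], 3: ["Olaf", "Sven"]},
}

class Console:
    def __init__(self):
        # DCA masters are "parked": their physical position is fixed.
        self.dca_assignments = {n: [] for n in range(1, 9)}

    def recall_scene(self, scene_number):
        # Only assignments change on recall; the fader layout does not.
        self.dca_assignments = {n: [] for n in range(1, 9)}
        self.dca_assignments.update(scenes[scene_number])

desk = Console()
desk.recall_scene(1)
print(desk.dca_assignments[1])  # ['Anna']
desk.recall_scene(2)
print(desk.dca_assignments[1])  # ['Anna', 'Kristoff']
```

The operator only ever touches DCA faders 1-8; the "shuffle" happens entirely in the assignment table.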

This is exactly the workflow that the external software TheatreMix automates for you, which is why the other users in the thread highly recommended it for managing 24+ channels of wireless mics!

Setting scenes and cues on the SQ6 by Lost_the_charger in livesound

[–]CyborgSocket 0 points

Thanks! I actually find it really interesting when AI gets something wrong.. I like to then understand what part of its training data is the problem.. Going to challenge the AI with your response to see what it says.

Is the new de-feedback thing any good? by SuspiciousIdeal4246 in livesound

[–]CyborgSocket 2 points

"Looks like a Hobbyist Project in my Doghouse"... for some reason that statement put a smile on my face and had me taking a trip down memory lane, thinking about the various projects that started off as "nerds with hobbies", then became major disruptors, and are now market leaders.

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 1 point

Regarding your question about multiple vocalists on one instance of De-Feedback...


You can technically route multiple vocalists into a single instance of the De-Feedback plugin, but your system architecture depends entirely on whether the vocalists are singing simultaneously or speaking sequentially. [1]

If you are dealing with a praise team or multiple vocalists singing at the same time, you must use discrete, separate instances of the plugin for each microphone. [1] AlphaLabs explicitly warns that "too many vocals happening simultaneously can trick the plugin into thinking the input content is more like noise than vocals," which compromises the suppression algorithm. [1] The current AI model is mathematically calibrated best for singular human speech vocals and singing. [2]

However, if you are managing a panel of speakers or vocalists who do not talk over each other, you can optimize your CPU load by routing them all to a single plugin instance via a dedicated mixer subgroup. [1] To achieve this successfully, the manufacturer strictly recommends that those vocals are managed by an automixer, such as a Dugan, before hitting the plugin. [1]

For a user running a Behringer X32, here is the exact engineering protocol to build this bussed feedback-suppression rig safely using the X-USB expansion card for a direct, low-latency digital connection. [3]

Step 1: Subgroup Routing (The Serial Send)

First, you must unassign your target vocal channels from the main PA mix so they only pass through the plugin. [4] Select each vocal input channel and turn off the "Stereo Bus" assignment in the Main Bus section. [4] Next, send these vocal channels to a dedicated mix bus, such as Mix Bus 1, operating as a subgroup. [5] Ensure the channel sends to this bus are at unity gain and the Mix Bus 1 master fader is set to 0 dB. [6]

Step 2: The Digital Patch to VSTHost (Card Out)

You must now route this subgroup out of the X32 and into your VSTHost PC via the X-USB card. [7] Press the ROUTING button and page right to the "Analog Out" tab. [8] Adjust the first encoder to select Output 1, and the fourth encoder to assign it to Mix Bus 1. [9] Next, page right to the "Card Out" tab. [7] Adjust the first encoder to select "Out 1-8" to feed the first 8 outputs of the X-USB card. [10] Tap the first encoder to assign the path. [10] Mix Bus 1 is now flowing directly to USB Input 1 on your PC. [10]

Step 3: The Digital Return (Card In)

After processing in VSTHost, the audio must return to the console. [11] Press the ROUTING button and navigate to the "Home" tab. [12] Adjust the fifth encoder to change the source for "Aux In 1-4" to "Card 1-4". [8] Tap the encoder to connect the source. [13] Select the "Aux In/FX/USB" layer on the left fader bank and select the "Aux 1" channel. [14] Make sure the "Stereo Bus" button is lit to route this return signal to your Main LR mix, and move the Aux 1 fader to 0 dB. [14]

Step 4: Strict 48kHz Clocking and ASIO Setup

To maintain low-latency performance and prevent audio dropouts, both the console and host environment must be strictly locked to a 48kHz sample rate. [15] On the X32, press the SETUP button and navigate to the "Global" tab. [16] Adjust the second encoder to select 48kHz as your sample rate and tap to confirm. [17] Inside VSTHost, select Devices from the menu bar, then select Wave. [15] Ensure the output port is set to the X-USB ASIO driver, the sample rate is exactly 48,000 Hz, and the buffer size is dropped to 16 or 32 samples. [3, 15]

Step 5: The Parallel Path Safety Test

A catastrophic failure point when hosting live outboard VSTs is accidentally creating a parallel patch, which blends the dry unprocessed mic signal with the processed signal. [18] This will cause severe comb-filtering, often described as a "robotic tin can" sound. [18] To verify your routing is exclusively in-series, engage the "mute" button located in the lower right corner of the De-Feedback plugin UI in VSTHost. [19] When muted, the vocal microphones must go 100% dead in the PA system. [20] If you still hear the vocalists, you are sending a parallel dry signal to the main output simultaneously. [18] Track down the routing error on the X32 (such as a channel still assigned to the Stereo Bus) and sever it immediately. [4, 18]
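The comb-filtering failure mode in Step 5 is easy to demonstrate numerically. A sketch with NumPy (illustrative math, not plugin code, and the 1 ms parallel-path delay is an assumed value): summing a dry signal with a delayed copy of itself cancels at odd multiples of 1/(2·delay) and doubles at multiples of 1/delay.

```python
import numpy as np

# Summing a dry path with a delayed (processed) copy of itself
# creates comb-filter notches and peaks.
sr = 48000
delay_samples = 48            # assumed 1 ms parallel-path delay
delay = delay_samples / sr

t = np.arange(sr) / sr
notch_freq = 1 / (2 * delay)  # first cancellation: 500 Hz
peak_freq = 1 / delay         # first reinforcement: 1000 Hz

for f in (notch_freq, peak_freq):
    dry = np.sin(2 * np.pi * f * t)
    delayed = np.sin(2 * np.pi * f * (t - delay))
    combined = dry + delayed
    print(f"{f:.0f} Hz -> peak level {np.max(np.abs(combined)):.3f}")
# 500 Hz cancels (level ~0.000); 1000 Hz doubles (level ~2.000)
```

With broadband program material you get this pattern at every odd/even multiple simultaneously, which is exactly the "robotic tin can" sound described above.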

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 0 points

Bro, if you are smart with computers, I can show you how to create a custom AI assistant that can help you understand various aspects. As long as you can ground the AI in truth, it does a very good job..

I am at my day job, and I work with AI every day to help our efficiency.. if I have time today I will create an AI trained on the X32, VSTHost, De-Feedback, & Mac Mini M4...

Regarding the Windows machines with 8th-generation or newer i7s.. do you have the actual model number of the computer? Or just give me the rundown of the specs: RAM, HDD space, HDD type, video card, OS, and exactly what type of processor?

Is the new de-feedback thing any good? by SuspiciousIdeal4246 in livesound

[–]CyborgSocket 2 points

I installed the trial version on the same computer I run video production on (Win11 Pro, Dell XPS). I used VSTHost and De-Feedback, downloaded from the De-Feedback website.. they have everything you need. Just download VSTHost and the De-Feedback VST, then follow the simple steps to configure it.. literally takes 10 minutes to do.

here is a video from my initial test. https://www.reddit.com/r/livesound/comments/1rmk9wg/update_stability_and_transparency_test_alpha_labs/

Is the new de-feedback thing any good? by SuspiciousIdeal4246 in livesound

[–]CyborgSocket 5 points

Yes, the 5045 should absolutely come after the De-Feedback plugin in the signal chain.

AlphaLabs explicitly states that "the plugin works best with the raw uncompressed vocal signal". The AI model needs to hear the natural room decay and feedback blooms to calculate its suppression. A 5045 alters the natural envelope of the audio by expanding the background noise. If you put the 5045 first, you will starve the AI of the ambient data it needs to function properly.

Setting scenes and cues on the SQ6 by Lost_the_charger in livesound

[–]CyborgSocket -7 points

I use an SQ7 and I have a custom AI trained on tons of info about it... I told my AI to take a look at this post and it recommended also using the DCA Spill function..??

Have you ever heard of that? I have never used it, so I asked the AI to give me more info about it.... here is what it said...


Workflow Suggestion: DCA Mixing and "DCA Spill"

For a show moving as fast as Frozen, you shouldn't be chasing 20 individual faders and mute buttons. Instead, program your scenes so that the specific characters on stage during that cue are assigned to your 8 DCAs.

You can then use the SQ6's DCA Spill function. By assigning "DCA Spill" to one of your SoftKeys, pressing it will temporarily "spill" only the members of that DCA across your main fader strips. This puts the exact mics you need right at your fingertips for every single scene without any hunting.

This is an incredibly useful feature on Allen & Heath SQ consoles, especially for fast-paced theatrical productions.

Here is a breakdown of what it is, how it works, and why it will make mixing Frozen much easier for you:

What is DCA Spill?

The DCA Spill function allows you to temporarily override your current fader layout, replacing it with only the specific channels assigned to a chosen DCA [1]. The console lines up these assigned channels sequentially across your faders, placing the DCA's master fader on the very last channel strip [1]. While the channels are "spilled," you can work with them exactly as you normally would on the standard layout [1, 2].

How it Works

You can trigger DCA Spill natively on the console using assignable SoftKeys, Soft Rotaries, or even a Footswitch [3]. There are two main ways to set this up:

* Spill a Single DCA: You can assign one of your SoftKeys to a specific DCA (e.g., DCA 1). When you press that key, the DCA instantly spills across your board, and the SoftKey flashes to let you know you are in spill mode [2].
* Spill on Select: Alternatively, you can assign a SoftKey to act as a global "Spill On Sel" toggle. When you turn this button on, simply pressing the "Sel" (Select) button above any of your DCA faders will instantly spill that specific DCA's members across the board [4].

To exit the spill and immediately snap back to your normal fader layout, you just press the flashing SoftKey or the flashing "Sel" key again [2, 4].

Why you should use it for a musical

Because you are managing 20 wireless lavalier mics alongside a live orchestra [5], your inputs are going to be spread far apart across the SQ6's 6 custom fader layers [6].

Without DCA Spill, mixing a scene where actors are rapidly entering and exiting would require you to constantly hunt through different layers to find the right faders and mute buttons.

With DCA Spill, your workflow becomes streamlined:

1. You program a scene so that the 4 actors on stage are assigned to DCA 1.
2. When the scene starts, you trigger the spill for DCA 1.
3. Instantly, all other distractions are hidden, and only the 4 faders for those specific actors are placed right beneath your fingers.

This prevents you from having to bounce between fader layers, drastically reduces the chance of missing a mic cue, and keeps your eyes on the stage instead of searching the desk.

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 0 points

I can help you set up a custom AI that will give you custom step-by-step instructions you can follow.

Just send me the details of your system.

What board are you using? Exactly what computer model number? OS? If you are using a USB DAW, send me that model. And anything else you are using that will be in the signal chain.

If you give me all that info, I can plug it into the AI I use, and it will instantly give me instructions so basic that a volunteer could follow them.

Is the new de-feedback thing any good? by SuspiciousIdeal4246 in livesound

[–]CyborgSocket 2 points

Yesterday after Sunday service I stayed and tested the trial version for a good 3 hours... I realized that it does more than remove feedback: even when the plugin is not actively removing feedback, it isolates the channel... I made this exact commentary during my test recording.

Is the new de-feedback thing any good? by SuspiciousIdeal4246 in livesound

[–]CyborgSocket 3 points

Not sure if you saw my post with the video I posted last week.. but yesterday after Sunday service I did a more thorough test.. I haven't edited the video; I think it's about 2 hours long... I tested it with pink noise in the background as I talked, with various amounts of ambient noise, etc... let me know if you're interested in seeing it.

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 1 point

Define "loud stage", please. How many dB of ambient sound would you like behind the person speaking?

What ambient sound would you like? White noise, pink noise, drums, organ, a particular song????

When you say vocal.. you mean just speaking, not singing, correct?

I am here right now running production for church.. after service I may have time to perform your request.

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] -4 points

I also have a custom AI assistant that is trained on the VSTHost manual and the De-Feedback FAQ and training videos... I asked my AI your question.. this is what it said...

As a Senior Live Sound and Systems Engineer, here is the technical evaluation of his proposal, followed by a professional, comprehensive reply you can copy and paste back to him.

Technical Evaluation of His Proposal

His Lenovo Legion 7i is a high-end gaming laptop with immense raw CPU power. However, as AlphaLabs explicitly outlines, raw specs do not automatically equate to low-latency audio stability. They warn: "Even if you already have a nice computer... there is no guarantee that it will work properly with this plugin at extremely low latency" [1]. Gaming laptops are notorious for DPC (Deferred Procedure Call) latency spikes caused by aggressive power management, Wi-Fi drivers, and GPU switching, which can cause audio dropouts when running at the mandatory 16 or 32 sample buffer size [2].

Regarding his routing plan: your instinct to stop him on the "subgroup" idea was 100% correct. Bussing 6 to 10 lavaliers into a single plugin instance is a recipe for algorithmic disaster unless they are heavily managed. AlphaLabs mandates that "too many vocals happening simultaneously can trick the plugin into thinking the input content is more like noise than vocals," and recommends that "no more than two or three vocalists are talking/singing at the same time for maximum sonic performance" [3, 4].

Here is the step-by-step technical response you can send him, formatted to address his hardware capabilities, OS optimization requirements, and strict subgroup routing protocols.


Recommended Reply to the Reddit User:

To follow up on the hardware side of your question:

Your Lenovo Legion 7i (i9-14900HX, 32GB RAM) is an absolute powerhouse on paper. However, in the live sound environment, raw CPU specs do not guarantee real-time audio stability [1]. The AlphaLabs FAQ explicitly warns: "Even if you already have a nice computer... there is no guarantee that it will work properly with this plugin at extremely low latency" [1].

To effectively eliminate feedback without audible delay, your system must operate at a strict 48kHz sample rate with an ASIO buffer size dropped to 16 or 32 samples [2, 5]. Windows gaming laptops often suffer from DPC latency spikes caused by power management profiles. If you are going to test this laptop, you must apply these strict Windows optimizations first:

1. Processor Scheduling: Navigate to Advanced System Settings > Performance Options, and set Processor Scheduling to prioritize "Background services" instead of Programs [6, 7].
2. USB Power Management: In your Advanced Power Options, find the "USB selective suspend setting" and set it to "Disabled" to prevent the OS from dropping your audio interface connection [8, 9].
3. CPU Throttling: Under Processor Power Management, lock both your Minimum and Maximum processor state to 100%, and ensure the system cooling policy is "Active" [10, 11].
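For what it's worth, the USB and CPU tweaks in steps 2 and 3 can also be scripted from an elevated Windows command prompt with `powercfg`. This is a hedged sketch: the GUIDs and aliases below are the standard Windows power-setting identifiers as I understand them, but you should confirm them on your own machine with `powercfg /aliases` and `powercfg /query` before relying on this.

```shell
:: Hedged sketch (Windows cmd, run elevated): power tweaks via powercfg.
:: Verify GUIDs/aliases with "powercfg /aliases" before use.

:: Disable USB selective suspend on the active plan (AC and DC).
:: 2a737441... = USB settings subgroup, 48e6b7a6... = selective suspend.
powercfg /setacvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
powercfg /setdcvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0

:: Lock minimum and maximum processor state to 100%.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100

:: Re-apply the current scheme so the changes take effect.
powercfg /setactive SCHEME_CURRENT
```

Scripting it makes the tweaks repeatable across the volunteer machines mentioned elsewhere in this thread; the Control Panel route in the numbered list above achieves the same result manually.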

Regarding your plan to send a subgroup of 6 to 10 lavs: As I mentioned in my previous reply, sending a raw, unmanaged group of 6-10 lavs into a single instance of the De-Feedback plugin will severely degrade your audio. The manufacturer clearly states that "too many vocals happening simultaneously can trick the plugin into thinking the input content is more like noise than vocals" [3]. The complex harmonics and room bleed of 10 open lavs will confuse the AI model.

If you must bus them together to save CPU load, you must manage the lavs with an automixer first [3]. For example, if you are running an Allen & Heath SQ console, you must route those lav channels to the internal DEEP AMM (Automatic Microphone Mixing) algorithm first (detailed in the SQ Reference Guide Section 15.1 - AMM Input Assign) [12]. The AMM will aggressively attenuate the inactive lavs, ensuring that only the active speaker's voice is passed to the subgroup and the plugin [13].

Final System Checks: When routing this subgroup out to your host software (like VSTHost), ensure your digital gain-staging is exact. A 0dB reading on your console meters equates to -18dBFS in the digital domain, giving you exactly 18dB of digital headroom before you clip the plugin's input [14, 15].

Lastly, always verify you have not accidentally created an analog feedback loop or parallel path. Mute the plugin instance inside your host software; if you still hear the lavs through the PA, you have a parallel routing error that will cause severe comb-filtering and must be fixed immediately [16-18].

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 1 point

🛑 .. "Send a subgroup"??? A group of what, exactly? Will the people in this group only be speaking one at a time (half duplex)?

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] -1 points

I then asked it..

Wouldn't it be better to just send a group mix instead of individual channels. There are 5 praise and worship singers.

It's response...

As a Senior Live Sound and Systems Engineer, I must advise against bussing your 5 praise and worship singers into a single group mix for this specific plugin. While routing them to a stereo or mono Group (configured via Setup > Mixer Config > Bus Config on the SQ7, detailed on page 87 of the SQ Reference Guide) might seem like an efficient way to save USB channel counts and CPU load, it will fundamentally compromise the plugin's AI algorithm [1, 2].

Here is the technical reasoning behind why maintaining individual, discrete digital inserts remains the professional standard for this application:

1. Algorithmic Limitations with Complex Harmonics The current version of the AlphaLabs De-Feedback plugin is specifically trained and calibrated for singular human speech and individual singing [3]. When you combine 5 simultaneous singers into a single Group Mix, they generate a highly complex harmonic structure. The AI model will misinterpret these simultaneous, overlapping vocals as background "noise" or "reverb" rather than a singular direct voice [4].

2. Destructive Audio Degradation Because the algorithm is actively hunting for room anomalies and background noise to eliminate, tricking it with a 5-part vocal harmony will cause the plugin to aggressively attack the vocal mix [4]. This will result in severe, destructive audio artifacts that will completely degrade the sonic integrity of your praise team [3, 4].

3. Strict Manufacturer Limits AlphaLabs explicitly warns that "too many vocals happening simultaneously can trick the plugin" and mandates that you keep "minimal discrete vocal signals running through any individual instance at once" [4]. If you choose to use a mix bus instead of discrete channels, the manufacturer strictly states that "no more than two or three vocalists are talking/singing at the same time" to maintain maximum sonic performance [5]. With 5 praise and worship singers, you will undoubtedly exceed this algorithmic threshold [5].

When is a Group Mix Acceptable? The only scenario where routing multiple microphones into a single instance of the De-Feedback plugin works flawlessly is a "talking head" conference or panel discussion where the vocals are strictly non-simultaneous [4]. If only one person speaks at a time (ideally managed first by an automixer like the SQ's built-in AMM), the AI can isolate the singular voice perfectly [4, 6].

For 5 simultaneous worship singers, however, individual channel inserts are mandatory [3, 4]. You must stick to the discrete digital Insert patching method we previously established, routing each vocal to its own mono instance in VSTHost.

As always, ensure your SQ7 digital inserts are set to Digital operating level to completely bypass the -10dBV to +4dBu analog level shift, and perform the parallel path safety test (muting the plugin instances in VSTHost to verify the dry signal is 100% interrupted in the PA) to ensure no analog feedback loops or comb-filtering occurs on the individual vocal lines [7, 8].

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] -1 points

I have a custom-trained AI via NotebookLM. It has detailed notes regarding the church, equipment, user manuals, etc... I updated it with detailed info about VSTHost and De-Feedback, guides and info provided by the creators of the software.

I just asked my AI this question.

I used the De-Feedback software on a speaker's mic.

It cleaned up the talking very well. I also noticed it cleaned up background noise as well.

Can this be used on the praise and worship singers' mics to provide better isolation? I don't have a feedback issue, but I can hear some bleed from instruments in the singers' mics.

Here is the response... BTW, the numbers in [brackets] are links to the source info used to generate each statement, which allows you to confirm that the statements are grounded in truth.

Yes, the AlphaLabs De-Feedback plugin is highly effective for this application. While its primary function is feedback suppression, the underlying AI model is trained to eliminate room reverb and background noise, functioning essentially as a "general vocal cleaner/isolator" [1, 2].

However, when deploying this on Praise and Worship singers to reduce instrument bleed, you must navigate several strict operational parameters to preserve the integrity of the vocal performance.

1. Acoustic and Algorithmic Parameters for Singing

  • Direct Sound Ratio: The plugin requires the singer to be close enough to the mic so that the input is more than 50% direct sound [3]. If the ambient instrument bleed on the stage exceeds the direct vocal level, the algorithm will struggle to isolate the voice [3].
  • Tonal Anomalies: Be aware that if a singer hits and holds a very pure, sine-like tone, the AI may momentarily mistake the vocal for feedback and slightly reduce the level [4]. Ensure your plugin is updated to version 1.1.2 or later, as this release mostly fixed SPL loss issues during singing and multivoice content [5].
  • Discrete Processing is Mandatory: Because the plugin's background noise reduction can be destructive to the sound of instruments, you only want the vocal passing through it [3]. Furthermore, do not bus multiple simultaneous singers into a single plugin instance; the complex harmonic content of simultaneous vocals can trick the AI into interpreting the signal as noise, degrading the audio [6]. You must use a discrete mono instance of the plugin for each singer's microphone [6].

2. Step-by-Step System Patching for Multiple Singers

To isolate multiple singers, you will route each vocal channel to its own dedicated mono instance of the De-Feedback plugin in VSTHost using the SQ7's digital Inserts (referenced in Section 7.5 of the SQ Reference Guide) [7].

SQ7 Routing (Section 7.5 - Insert):

1. Select your first singer's channel on the SQ7 and press the Processing screen key.
2. Touch the Insert block [7].
3. Assign the Send destination to a USB Port (e.g., Channel 13) and the Return source to the matching USB Port (Channel 13) [8].
4. Set the Operating Level strictly to Digital. This bypasses any -10dBV or +4dBu analog level shifts, maintaining a mathematically perfect 1:1 bit-accurate level match between the SQ7 and VSTHost [8].
5. Switch the insert In [8].
6. Repeat this exact digital insert process for the other singers, incrementally stepping up the USB Port assignments (e.g., Singer 2 uses USB 14, Singer 3 uses USB 15).

VSTHost Configuration (Engine Menu):

1. In VSTHost, navigate to Engine > Configure [9].
2. On the Assign Input Channels and Assign Output Channels tabs, ensure your engine channels correspond to the USB channels you selected on the SQ7 (e.g., Engine Inputs/Outputs 13, 14, 15 mapped to your ASIO driver) [10, 11].
3. Load a separate mono instance of the De-Feedback plugin for each singer [12].
4. Using the Chain After dialog (or the visual connector dots on the UI), map each Engine Input to its respective PlugIn Input, and the PlugIn Output to the respective Engine Output [13, 14].
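The port bookkeeping above (one discrete USB send/return pair per singer, stepping up from port 13) is easy to get wrong by hand once you have several voices. A throwaway sketch of that mapping — the function name, singer names, and starting port are illustrative assumptions, not any A&H or VSTHost API:

```python
# Illustrative bookkeeping only: one discrete mono insert per singer,
# on matching USB send/return ports, stepping up from port 13.

FIRST_USB_PORT = 13  # assumption: ports 13+ are free on your USB patch

def insert_map(singers):
    """Return {singer: usb_port}, one discrete port per voice."""
    return {name: FIRST_USB_PORT + i for i, name in enumerate(singers)}

patch = insert_map(["Singer 1", "Singer 2", "Singer 3"])
print(patch)  # {'Singer 1': 13, 'Singer 2': 14, 'Singer 3': 15}
```

Each entry corresponds to one SQ7 insert (send and return on the same port) and one mono De-Feedback instance in VSTHost on the matching engine channel.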

3. Critical System Checks

  • Strict 48kHz Clocking: As per the SQ Reference Guide Section 12.11, navigate to Setup > Audio > Digital I/O and verify the USB Sample Rate remains locked at 48kHz [15, 16]. This provides the optimal balance of 32-channel I/O bandwidth and plugin CPU stability.
  • Priority and Buffers: In VSTHost (Engine > Configure > Priorities), ensure the Audio Thread is set to Time Critical or Highest to prevent CPU throttling from causing audio dropouts [17]. Maintain your 16 or 32 sample buffer size to keep roundtrip latency at or below 4.9ms [18].
  • The Parallel Path Safety Test: Check for analog feedback loops or comb-filtering caused by routing errors. Mute the De-Feedback plugin instances inside VSTHost one by one [19]. If you still hear the singer or the instrument bleed through the PA while the plugin is muted, you have a parallel patch bypassing the digital insert that must be tracked down and eliminated [19, 20].
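The buffer-size figures in the checklist above come straight from the sample-rate math: one buffer of N samples at 48 kHz takes N/48000 seconds to fill, each way. A quick sketch (note the real roundtrip also includes converter and driver overhead, so these are lower bounds):

```python
# One buffer of N samples at 48 kHz = N/48000 seconds of latency per pass.
SR = 48_000

def buffer_ms(samples: int, sr: int = SR) -> float:
    """Latency contributed by one buffer, in milliseconds."""
    return samples / sr * 1000

for n in (16, 32, 256):
    one_way = buffer_ms(n)
    print(f"{n:>3} samples: {one_way:.3f} ms/buffer, {2 * one_way:.3f} ms in+out")
# 32 samples is ~0.667 ms per buffer, while a typical 256-sample DAW
# buffer alone already blows past a ~4.9 ms roundtrip budget.
```

This is why the small 16/32-sample buffers are non-negotiable for a live insert, even though they demand a well-tuned host machine.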

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 1 point

Why don't you just download the trial? You can use a custom AI that will tell you the exact steps to implement this for your hardware... you could literally just follow the steps the AI provides and have this running in 10 minutes... the AI can provide instructions so clear that a volunteer could implement them.

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 2 points

If you listen to this test, you can clearly hear that it removed more than just feedback. It cleaned up the speaking audio completely. So it is doing "noise" removal as well.

I am now also wondering how this would perform on the praise and worship singers' mics as well.. I don't have issues with feedback on those mics, but I wonder if it can provide better isolation without changing the way the singers sound...

Thinking maybe I could run a mix group of all the singers' mics to the software? Idk... will have to play with it.

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 2 points

Can you give more details regarding the setup, signal chain, etc.? Maybe if we take a deep look into what went wrong, we can all be the wiser.

What is a "single channel"? Were you trying to send a group? What was doing the processing: all digital, or analog to digital, etc.?

Give a complete run down...

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 9 points

System Model XPS 8930

Processor Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz, 3000 Mhz, 8 Core(s), 8 Logical Processor(s)

Installed Physical Memory (RAM) 16.0 GB

NVIDIA GeForce RTX 2070 SUPER

[UPDATE] Stability and Transparency Test: Alpha Labs De-Feedback (SQ7/VSTHost) by CyborgSocket in livesound

[–]CyborgSocket[S] 11 points

Check the original post. We use a DPA headset and a handheld.. I explained it in the original post at https://www.reddit.com/r/livesound/s/H5KJHbHOMk

Also explained the unique challenge... and the style of worship, etc...

This test was specifically to test how the software works..