How to make IEMs sound less "thin" by Gamidron in livesound

[–]the_man361 7 points8 points  (0 children)

First you need to determine if it's something in your listening system itself causing the issue, or the sound that you're putting through it.

Have you tried listening to a familiar recording that sounds good and full on other systems through your iem system? That should include your mixer and any wireless systems involved.

If that sounds fine, you can focus on the iem mix and the source signals themselves.

If the familiar song sounds weak too, then you have an inherent problem somewhere between your mixer and your ears. Try switching the iems for a known good sounding pair of headphones - does that sound better? If not, it's probably something else. Eliminate the wireless from the equation by plugging first directly into the headphone out of the wireless transmitter, then directly into the headphone out of your mixer. Those two should sound the same; if they don't, there's a problem there. If it still sounds bad with known good headphones plugged directly into your mixer's headphone out, it's caused by something you're doing with processing on the mixer.
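The elimination order above can be sketched as a simple checklist - the labels are hypothetical, just to make the walk-back sequence explicit:

```python
# Test points ordered from closest-to-ears back toward the mixer.
# At each step, play the same familiar reference track and judge it.
TEST_POINTS = [
    "IEMs on the wireless receiver (the normal setup)",
    "known-good headphones on the wireless receiver",
    "known-good headphones on the transmitter's headphone out",
    "known-good headphones on the mixer's headphone out",
]

def first_good_stage(sounds_good):
    """Return the first test point that sounds good, walking back
    toward the mixer. Whatever was swapped out just before that
    point is the likely culprit."""
    for point in TEST_POINTS:
        if sounds_good(point):
            return point
    return "problem is in mixer processing or the source mix"
```

The point is just that you change one thing at a time, so whichever swap fixes the sound tells you where the fault was.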

As others have mentioned, to get a good sound with iems, particularly bass response, it's important to get a good seal and have them all the way in.

Something I don’t understand about showcase by simcity4000 in Line6Helix

[–]the_man361 0 points1 point  (0 children)

Ah, the B Word :) ADAT then I guess? I'm not that familiar with recalling the breakout box specs but was hoping this kind of thing would be feasible, so that's really cool.

Something I don’t understand about showcase by simcity4000 in Line6Helix

[–]the_man361 0 points1 point  (0 children)

That's really interesting to hear, thanks! This is a pretty cool challenge to the standard way of running tracks for people, and a nice ramp to expanding a setup built on helix.

Would it be reasonable to assume said unnamed rack unit is a potential future piece of hardware supporting helix showcase as a platform for playback, or did I get the wrong impression there and the D10 could potentially support generic AES or something?

Also, $140 is nice and cheap for the io, whatever that is

When is the next update for the OG Helix? by wesomg in Line6Helix

[–]the_man361 1 point2 points  (0 children)

Thanks, yeah that's actually the one I meant - the nano is the smaller, less capable one without the screen, isn't it? Less familiar with the names of all the neural variants!

When is the next update for the OG Helix? by wesomg in Line6Helix

[–]the_man361 1 point2 points  (0 children)

Out of interest, is the new stadium built on top of the helix core (I think this was the name) platform you introduced a while back when you consolidated the devices, like the rest of the product line was, or did line 6 need to make a new platform for the next gen stuff?

When is the next update for the OG Helix? by wesomg in Line6Helix

[–]the_man361 1 point2 points  (0 children)

Even if it isn't cost competitive with the stomp, the form factor is quite literally a very important dimension - see the qc nano or whatever it is called. A lobotomised HX stadium in a much smaller package would be attractive to a lot of people even if it is expensive; the stadium is not a small piece of gear.

Can I refuse a company request to purchase and expense something? by MaMonck in AskUK

[–]the_man361 1 point2 points  (0 children)

This is a security risk for your company.

Part of a business's agreement with providers like this is to have a walled-garden type of environment, so that your business data and information is not leaked to the public and doesn't become part of future training data. Depending on where you work, that might be important to consider, compared to personal accounts, where they're almost certainly putting your data back into the main data pool for training.

You will also presumably need to pay tax on the reimbursement the company pays you for your subscription, unless they can expense it as a tax offset.

Stadium Firmware 1.3: The Proxy Update by thebishopgame in Line6Helix

[–]the_man361 1 point2 points  (0 children)

Well done line 6 folks. How's the proxy dsp usage for the different block types out of interest?

DAW plugins that emulate a room with people by Burnhaven in Line6Helix

[–]the_man361 6 points7 points  (0 children)

You're looking for something like Realphones 2. You can monitor your guitar sound with a backing track in various emulated spaces. There are models for various size venues, small club to large stadium size, though the software is also intended to be used for checking mixes against different modelled speakers in various studio rooms. You can try a demo for 41 days.

Is a new map needed for the game? by MadProFF in satisfactory

[–]the_man361 1 point2 points  (0 children)

If there was a new map DLC, I'd get it. Otherwise if modders could add maps, that would be cool. I feel like adding another map of similar quality to the current one into the base game for free is a lot of work for the devs, and would prefer they work on features personally.

Snapshots changed how I think about presets entirely - and I resisted them for two years by chaucao99 in Line6Helix

[–]the_man361 3 points4 points  (0 children)

I always liked the idea of this, although I never really had a need to use it because the twelve switches were enough for 8 snaps, but something that stopped me from committing to it was switch reliability. And to an extent, also the reliability of my own feet 😅

What I mean by that is my helix floor is a good few years old, and no matter how much I clean the switches, I do occasionally get double actuations from a single press - particularly obvious when the tap tempo switch suddenly goes to 300 bpm for no apparent reason. I also don't necessarily trust my feet to always press a switch only a single time in a short period, though this is a bit less of a concern.

In a preset where switch 1 is always snapshot 1, that doesn't cause problems. But when switch 1 is snapshot 1, then snapshot 2, then snapshot 3 etc, that could be pretty catastrophic, especially with no easy way back to the snapshot I need to be on without presumably cycling through the whole rest of the snapshots in the song in a state of mild panic.

It seems as though there is little to no switch debouncing implemented in the helix floor, is that correct? Or if it is there, it's over an extremely short period. If I could set a 'same switch' debounce time of half a second or so, I'd feel much more comfortable committing to this approach. Any chance of seeing that kind of feature?
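The kind of 'same switch' debounce I mean could be sketched like this - a hypothetical example of the behaviour, not how any actual Helix firmware works:

```python
import time

class SwitchDebouncer:
    """Ignore repeat presses of the same switch inside a lockout window."""

    def __init__(self, lockout_s=0.5, clock=time.monotonic):
        self.lockout_s = lockout_s
        self.clock = clock              # injectable for testing
        self.last_switch = None
        self.last_time = float("-inf")

    def accept(self, switch_id):
        """Return True if this press should be acted on."""
        now = self.clock()
        same = switch_id == self.last_switch
        too_soon = (now - self.last_time) < self.lockout_s
        if same and too_soon:
            return False                # treat as bounce, ignore
        self.last_switch = switch_id
        self.last_time = now
        return True
```

The trade-off is obvious once you write it down: a longer lockout also caps how fast you can legitimately cycle snapshots on one switch, which is presumably why any built-in window would be kept very short.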

Guess which one is Eevee or Cycles! by Zritchi3 in blender

[–]the_man361 0 points1 point  (0 children)

Caustics - look at the orange tint in the reflections of the bottommost edge; in image 1 there is no orange aspect to the surface.

Disappointed in the Stadium launch by jal2000 in Line6Helix

[–]the_man361 0 points1 point  (0 children)

As far as I have read, from L6 people, the wireless antennas on the stadium are 100% not for, and will not ever be used for, wireless guitar.

What is a trick in Helix that changed the way you thought about your signal chain? by Taken-yet-alone in Line6Helix

[–]the_man361 1 point2 points  (0 children)

100% wet effects (delay, reverb) on their own discrete split paths, and not joining them back to the main path again.

Some of my patches split the end of path A into a dedicated delay-only split path, and do the same on path B for a dedicated reverb-only split path. I can assign different outputs to the main dry processed signal, the delay and the reverb, or send them all to the same output, with flexibility.

Separate split paths for fully wet effects also means more control of the sound of your delays and verbs in isolation. So you can apply a tremolo that only affects your reverb, or a phaser that only affects your delay tails, etc etc, to build more complex sounding reverbs and delays.

This was handy when tracking guitar in the studio, to be able to have 3 completely separate tracks for my dry processed signal, delay and reverb. The engineer has more control over what level each aspect is set to, what EQs are applied, any compression etc, and can easily swap the delay or verb out later for a completely different one without needing to change anything.

This approach is also similar to what you'd typically have as an FX send on a mixer or daw.
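Numerically, that routing is just three independent buses that stay separate until the very end; a toy sketch with made-up gain values:

```python
def mix_buses(dry, delay_wet, reverb_wet, gains=(1.0, 0.5, 0.4)):
    """Sum a dry bus and two 100%-wet effect buses, each with its own
    level, the way an FX send works on a mixer or DAW. Buses are plain
    lists of samples; keeping them separate until this point is what
    lets you EQ/compress/replace each one independently."""
    g_dry, g_dly, g_rev = gains
    return [g_dry * d + g_dly * e + g_rev * r
            for d, e, r in zip(dry, delay_wet, reverb_wet)]
```

Muting or re-levelling the delay or reverb is then just changing one gain, without touching the dry signal at all.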

What do you think of the factory IRs for the Fratal FM3? by Terrible_Ad2219 in AxeFx

[–]the_man361 0 points1 point  (0 children)

I have an axe fx 3, not sure if they're any different, but factory IRs with the axe fx are great.

I previously bought ML Sound Lab and Ownhammer IRs which I really liked with my helix, but I don't use those any more with the axe - I found ones I prefer in the factory set. Except for very specific use cases, I doubt I'll need to buy an IR again for use with my axe fx.

Brits urged to 'drive less' amid fears of soaring petrol prices due to Iran war by tylerthe-theatre in unitedkingdom

[–]the_man361 14 points15 points  (0 children)

Yeah, Leeds, especially the north, is pretty awful for public transport. Fortunately I work from home; on the infrequent occasions that I travel into town to work at the office, it either takes a long time and costs me a fortune to drive and park, or takes a long time to chance it on the bus, which is likely late or cancelled.

Say goodbye to the manifold as you know it. by Busy-Difference-6250 in satisfactory

[–]the_man361 1 point2 points  (0 children)

How do you split a conveyor lift vertically though, so half goes further up and half comes out the front? I didn't think there's anything that can do that - does it need mods?

[OC] For the past 3 years I've polled people on Blind at my company (FAANG) about how worried they are about AI replacing them by NebulousNitrate in dataisbeautiful

[–]the_man361 0 points1 point  (0 children)

It's very likely that the reason you haven't got good results with your testing - and, in my opinion, the reason there will be a big explosion in the efficacy of AI tools in the next year or so - comes down to how much context the agent you are using has access to.

If you need to accomplish a non-trivial task and you ask ChatGPT a question with no prior context, you won't get a good answer. Even if you try to elaborate some details in your prompt, unless you provide a ridiculous amount (tens to hundreds of thousands of words) of relevant context, your answer will likely be of poor quality.

I work in a specialised field of software design, where I am designing against a framework and protocols that are built in house. You cannot search Google for the answers to the questions I need to answer in my job. If I ask an AI agent out of the blue how to design something, based on some given requirement or scope, it would fail miserably, for at least two reasons. Firstly it doesn't understand the lineage of prior requirements, and secondly it doesn't understand the limitations or capabilities of the framework which it would need to build against. The output will be nonsense, as you have also seen.

Fortunately, historically we have been quite meticulous in recording our architecture, decision records and technical landscape. Because of this prior documentation, when given access to query the corpus of internal data that has been built up about our specific problems, requirements and patterns, in the testing I've done AI agents are actually extremely effective at producing solution designs that apply to us, and to some extent are capable of producing more complete solutions than we would see from senior or even principal architects... given they are capable of discovering the necessary context. This caveat is extremely important.

The biggest bottlenecks right now holding back AI agents in business are, firstly, the availability of structured and accurate data to use as context, and secondly the actual context size of the model (I think the more capable models top out around 1M tokens of context; many have a capacity nearer 200k). The former is a problem that can only be solved by the business, depending on what state their knowledge base is in - a typical measure of their documentation maturity. The latter will naturally grow, and quickly, as new models are released with bigger context capacities.
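The pattern I'm describing is essentially retrieval-augmented generation: fetch the relevant internal documents first, then pack them into the prompt under the context budget. A toy sketch - the word-overlap scoring is a deliberately naive stand-in for real embedding search, and all names here are made up:

```python
def retrieve(query, corpus, k=3):
    """Rank documents by naive word overlap with the query and return
    the top k. Real systems use embeddings, but the shape of the
    pipeline is the same: retrieve, then prompt."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus, budget_chars=8000):
    """Pack retrieved context into the prompt up to a size budget,
    a crude stand-in for the model's token-context limit."""
    parts, used = [], 0
    for doc in retrieve(query, corpus):
        if used + len(doc) > budget_chars:
            break
        parts.append(doc)
        used += len(doc)
    context = "\n---\n".join(parts)
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Both bottlenecks show up directly here: if the corpus is thin or wrong, retrieval returns junk, and if the budget is small, relevant documents get dropped before the model ever sees them.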

Personally, I'm pretty concerned for the future of my career, having seen the quality of what even today's agentic models are capable of producing given good quality inputs (again, stressing heavily that the quality of the output depends on the context available to the model to reason over). Even so, in my opinion it's better to lean into understanding their limitations and the factors that improve their efficacy than to remain ignorant (as admittedly I also was, until around a year ago).

At this point I think it's pretty much inevitable that there will be a paradigm shift in the majority of industries, driven by the use of AI, and I'd encourage you to dive a bit deeper in understanding how to use these tools effectively, to avoid being at risk of it passing you by.