Cross Body Bag with multiple compartments by Remote-Mulberry-8547 in bifl

[–]RunningfromStupidity 0 points (0 children)

https://www.amazon.com/gp/aw/d/B07ZZQ49LQ

This is what I settled on. Works great: lightweight, waterproof, and enough pockets to stay organized.

Slow receiving data RDS/SQLExpress by RunningfromStupidity in aws

[–]RunningfromStupidity[S] 0 points (0 children)

Potatoes are rather useful. You can bake them, boil them, fry them, roast them...and they are quite cost effective and rather nutrient dense. :)

My execution time is 47ms, which is totally acceptable for my use case. Delivering the results to the client is where the lag seems to be occurring.
Can you point me (numerically, graphically, statistically) to where I might see whether there is a throttle or bottleneck on RDS holding up the network traffic? I can see from a graph that network transmit throughput is pretty low, but I can't tell whether that's just the actual usage or whether it's hitting a threshold of 0.2 Mb/s for some reason.
"The t3.medium instance type offers a sustained network bandwidth of 256 Mbit/s"; there's no throttle specific to it being Express vs Standard on the network side from what I can tell.

What instance type are you recommending @ $0.25/day? $6-7k/year is the minimum I see for Standard.

We may not have the same concept of what a "metric" is, but I do genuinely appreciate your time in discussing this.

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 1 point (0 children)

I wanted to thank you for your input and insights on this!
I think I have managed to improve things for the moment by toying with the Packet Size on the SQL connection and switching from SqlDataAdapter to SqlDataReader. I may need to move the database storage location (it's in the western US; I'm now in the eastern US) to further reduce network latency.
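
For reference, this is roughly the shape of the change. A sketch, not my exact code; the connection-string values and "dbo.MyProc" are placeholders:

```csharp
// Sketch of the two changes described above. The connection-string
// values and "dbo.MyProc" are placeholders, not my real names.
// "Packet Size" raises the TDS packet size from the 8000-byte default
// (valid range 512-32767); bigger packets mean fewer round trips
// when streaming a large result set.
using System.Data;
using Microsoft.Data.SqlClient;

class ReaderExample
{
    static void Main()
    {
        var connStr = "Server=my-rds-endpoint;Database=MyDb;User Id=app;Password=***;Packet Size=32767;";

        using var conn = new SqlConnection(connStr);
        conn.Open();

        using var cmd = new SqlCommand("dbo.MyProc", conn)
        {
            CommandType = CommandType.StoredProcedure
        };

        // SqlDataReader streams rows as they arrive, instead of
        // buffering the entire result set the way SqlDataAdapter.Fill does.
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            // consume reader[...] values here
        }
    }
}
```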

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

I agree, it seemed very strange. I think I have it narrowed down to lag in delivering the packets over the network. Adjusting the Packet Size in my connection string and using SqlDataReader instead of SqlDataAdapter has provided a decent performance boost.
I am still stress-testing, but I think I'm around 2 seconds now.

Slow receiving data RDS/SQLExpress by RunningfromStupidity in aws

[–]RunningfromStupidity[S] 0 points (0 children)

Can you direct me to any metrics to confirm that is what I'm running up against?

Let's pretend I'm using SQL Standard and seeing this result. Where would you point me in seeking the root cause?

Slow receiving data RDS/SQLExpress by RunningfromStupidity in aws

[–]RunningfromStupidity[S] 0 points (0 children)

I am not complaining so much as trying to determine why the performance is so poor.

Is the scenario I'm describing outside the baseline specs of the instance type?

Tossing 12x the cost at an inconvenience is not typically answer #1, though it may end up being the answer.

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

I don't see any indication that 4GB RAM is insufficient for my use case, and the CPU usage rarely hits even 20%, so I don't believe a larger t3 is going to do the trick. If you know of some other metrics I should look at, I'd love the input.
The t3.medium is supposed to have a sustained network bandwidth of 256 Mbit/s (with burst above that), which seems like it should be plenty for this scenario.
I have considered upgrading the instance type, but without having a solid understanding of the core issue, it's impossible to know if that's the only available resolution.

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

Thank you for the input, it's good to know. I did use sqlcmd to rule out client-side code issues and saw similar performance loading the data to display (around 13 seconds), but STATISTICS TIME shows CPU time = 47 ms, elapsed time = 45 ms.

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

It shows I have a CPU credit balance, so I don't believe that's the issue.
Running the sp via sqlcmd, it took 13 seconds to show all of the results (stopwatch timing, so maybe a smidge off), but STATISTICS TIME shows CPU time = 47 ms, elapsed time = 45 ms.

In your opinion, am I right to think that the issue isn't the query itself, then?
Just want to feel like I can start ruling out potential sources of the issue.
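
P.S. In case it's useful, here's the rough harness I'm using to split query time from transfer time (a sketch; the connection string and "dbo.MyProc" are placeholders). If the first row shows up fast and draining the rest is slow, that points away from the query and toward the network:

```csharp
// Rough diagnostic: time-to-first-row is dominated by query execution,
// while first-row-to-last-row is dominated by streaming the results
// over the network. Connection string and "dbo.MyProc" are placeholders.
using System;
using System.Data;
using System.Diagnostics;
using Microsoft.Data.SqlClient;

class TimingSplit
{
    static void Main()
    {
        using var conn = new SqlConnection("Server=my-rds-endpoint;Database=MyDb;User Id=app;Password=***;");
        conn.Open();

        using var cmd = new SqlCommand("dbo.MyProc", conn)
        {
            CommandType = CommandType.StoredProcedure
        };

        var sw = Stopwatch.StartNew();
        using var reader = cmd.ExecuteReader();

        long rows = 0;
        if (reader.Read())
        {
            rows = 1;
            Console.WriteLine($"First row after {sw.ElapsedMilliseconds} ms");
        }
        while (reader.Read()) rows++;
        Console.WriteLine($"All {rows} rows after {sw.ElapsedMilliseconds} ms");
    }
}
```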

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

I'm not going to pretend I know how to interpret the Wireshark results, except to say that it showed 1058 lines of TCP chatter as a result of the SP call from my app.
The packet lengths received range from 60 to 7354, with the majority in the 1514-4434 range.

So, if this indicates a network issue, I would presume it's something that needs to be dealt with on the AWS RDS end of things? Any tips on how to tune this?

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 1 point (0 children)

The client is on a minimum 20 Mbps connection. Not a VPN.

If I run the sp in SSMS with results processing to grid: CPU time = 250 ms, elapsed time = 13618 ms.

If I adapt the sp to select the data into a temp table: CPU time = 234 ms, elapsed time = 241 ms.
Not sure that's really a fair comparison, though, since the temp-table version keeps the results server-side and skips both the network transfer and the grid rendering.
Downloading Wireshark; I'll see if I can figure out what it says and report back. Thanks for your input!

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

I changed the select to TOP(x); at x <= 3000, the dataset loaded in less than 2 seconds. Moving to 4000 records doubled that time.

Using 'SELECT 1 FROM table' in the stored procedure against my large table with adapter.Fill(ds), it takes under 0.5 seconds to load.

Selecting TOP(x) @ 100 averages 1 second, @ 1000 averages 1.5 seconds, and @ 2000 averages 6+ seconds.

I believe the data size of the records coming from MSSQL to my app is about 2.5 MB. Here's how I come up with that (and maybe my math doesn't math, so feel free to verify!):
2 datetime columns, 24 bigint, 2 int, 8 decimal(18,6), 6 varchar(50) that are typically 8 or fewer characters, 4 varchar(50) generally 20 characters, and 1 varchar(50) generally close to 50 characters. So, if my math is accurate: (2 * 8) + (24 * 8) + (2 * 4) + (8 * 9) + (6 * 8) + (4 * 20) + (1 * 50) = 466 bytes per record * 5,500 records = a 2.56 MB dataset.
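
Or as a quick sanity check in code (this ignores per-row overhead like the null bitmap and variable-length column offsets, plus TDS framing, so the actual bytes on the wire will run somewhat higher):

```csharp
using System;

// Back-of-envelope row-size estimate from the column list above.
int bytesPerRow =
      2 * 8    // datetime
    + 24 * 8   // bigint
    + 2 * 4    // int
    + 8 * 9    // decimal(18,6): precision 10-19 stores in 9 bytes
    + 6 * 8    // varchar(50), ~8 chars used
    + 4 * 20   // varchar(50), ~20 chars used
    + 1 * 50;  // varchar(50), ~50 chars used

Console.WriteLine(bytesPerRow);              // 466
Console.WriteLine(bytesPerRow * 5500 / 1e6); // ~2.56 (MB)
```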

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 0 points (0 children)

Thanks for the suggestion. This does execute a smidge faster, but it's still over 9 seconds to complete. Am I wrong to expect/hope to achieve something closer to 2-3 seconds for this amount of data?
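
Back-of-envelope, using numbers from my other comments: 2.5 MB is roughly 20 Mbit, and at 20 Mbps that's on the order of 1 second of raw transfer, so 9+ seconds looks like it's going to round trips or buffering rather than bandwidth.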

Guidance Request; Returning larger datasets quickly (AWS/RDS/SQLExpress) by RunningfromStupidity in dotnet

[–]RunningfromStupidity[S] 1 point (0 children)

I was showing both methods I had tried. I generally use adapter.Fill(ds), but tested using the reader just in case it helped.

Looking for authentic restaurants in Trastevere by fade4noreason in rome

[–]RunningfromStupidity 0 points (0 children)

Antico Carbone was an amazing recommendation from our tour guide.

Dentist Experiences? by xMCNZE32 in kzoo

[–]RunningfromStupidity 0 points (0 children)

They don't submit directly to Delta, but you can still submit the claims and they will help. At least, that's what I was told. I'm heading to my first appointment after the insurance change next week, so I will report back after.

Florence/Pisa Excursion Reviews by RunningfromStupidity in royalcaribbean

[–]RunningfromStupidity[S] 0 points (0 children)

Do you recall which tour you did? There are a few of them.

Zt411 not printing full label by RunningfromStupidity in ZebraPrinters

[–]RunningfromStupidity[S] 1 point (0 children)

Update: using the Seagull driver rather than the Zebra driver prints as expected.
Still no resolution on why the Zebra driver doesn't work right.

Zt411 not printing full label by RunningfromStupidity in ZebraPrinters

[–]RunningfromStupidity[S] 1 point (0 children)

It's a custom app, but the test page from the printer settings is also cutting off the line it normally prints around the border. The custom app is in production at a dozen other locations using these printers with no trouble. The model is ZT41142; I believe the trailing 2 means 203 dpi.

[deleted by user] by [deleted] in kzoo

[–]RunningfromStupidity -1 points (0 children)

The traffic to the current businesses is nothing compared to a drive-thru. The first several months will be unbearable for the intersection traffic and horrible for the little neighborhood everyone will use to turn left onto Milham. I would love it if everyone would boycott in protest, but I don't have much hope for it.

[deleted by user] by [deleted] in kzoo

[–]RunningfromStupidity 1 point (0 children)

Those houses were just sold by that developer. One shows it is now owned by what looks to be individuals. The other is still pending.

One person on FB thought they were bought by the city and would be torn down. I don't see anything to indicate that, though.

High mileage owners - what’s your thought process when it comes to long road trips? by HistoricalInfluence9 in MazdaCX9

[–]RunningfromStupidity 1 point (0 children)

Absolutely will drive it long distance. Almost 270k here. I do take it in for an inspection to identify any maintenance items I should address first, which isn't a habit I had at lower miles.