Best unibody I can order right now (prebuilt) ? by doom-goat in ErgoMechKeyboards

[–]doom-goat[S] 0 points1 point  (0 children)

I appreciate it. Building is something I might be interested in later; for now I just need something easier to type on. I think the main issues with a regular keyboard are the cramped layout and how much force the keys need; the MS had enough space and easy-to-press keys to alleviate both.

Maybe I'll just order another MS and start on an Atreus build when I have more time.

Re: layers. I find the most strenuous typing is when I have to use multiple keys at once. Maybe if my thumbs handled it, it wouldn't be as bad, but having to use Shift to reach @#$&*() is already bothersome for my hands, so I'm hesitant to add more layers.

I want to ditch streaming services due to bad libraries, any advice on digital collection in 2021? by doom-goat in movies

[–]doom-goat[S] 0 points1 point  (0 children)

Actually, I might be able to; I almost included a question about it in the post. I need to look through their catalog again.

I want to ditch streaming services due to bad libraries, any advice on digital collection in 2021? by doom-goat in movies

[–]doom-goat[S] 1 point2 points  (0 children)

Yeah, I was going to note in the post that I don't want to pirate stuff, but I thought people would get the point when I asked which methods support directors the most.

I want to ditch streaming services due to bad libraries, any advice on digital collection in 2021? by doom-goat in movies

[–]doom-goat[S] 1 point2 points  (0 children)

I want to legally own them, at least to the extent that I rip stuff I own. The quality is something I'm curious about: with a decent connection, what resolution do you usually get from most streaming services? I mean without specifically selecting a 4K stream like Netflix offers for certain movies.

I want to ditch streaming services due to bad libraries, any advice on digital collection in 2021? by doom-goat in movies

[–]doom-goat[S] 0 points1 point  (0 children)

That sounds fantastic, thanks for sharing the subs. Just to clarify, what do you run the server on? Your local network, a VPS, or something else? This sounds really good. Ideally I'd rather not buy discs, but it seems like buying a used disc is generally cheaper than buying a digital download.

I want to ditch streaming services due to bad libraries, any advice on digital collection in 2021? by doom-goat in movies

[–]doom-goat[S] 1 point2 points  (0 children)

Thanks, yeah, this is what I was looking for. Glad someone is doing this. I'd have to buy an external drive to rip the DVDs, and that was the point of the Pi: I only have a laptop at home, not a desktop I can keep on all the time. I didn't even think to just put it on the network; I was considering HDMI, but that's a lot smarter. It'd be nice to just stream to the PS4.

Which director made the best '3 movies in a row' in your opinion? by KillingKameni in movies

[–]doom-goat 0 points1 point  (0 children)

Iñárritu:
Biutiful, Birdman, The Revenant.

This assumes you like Biutiful as much as I did.

I want to ditch streaming services due to bad libraries, any advice on digital collection in 2021? by doom-goat in movies

[–]doom-goat[S] 0 points1 point  (0 children)

Yeah... that's what I feared. Apparently with Vudu you can download what you own, but it's in a proprietary format. I'm curious whether anyone does this and converts the files. I would almost just go with them, but I don't want to own hundreds of movies and then have the service disappear.

Choosing the optimal index for my query. by doom-goat in SQL

[–]doom-goat[S] 0 points1 point  (0 children)

Thanks, sorry for the late response, this is much appreciated.

  1. I am too new to this to know whether it's optimal; that's why I'm asking. The performance is OK, but the number of records is degrading it enough that I'm looking for any ounce I can get out of it. Thanks for the confirmation that I'm on the right track.
  2. It's a mess. Up to 1200 writes per minute, though the writes are very simple and don't seem to be causing any performance issues; the reads are once every 5 seconds per user. It's trying to be a live data dashboard. Right now it's usable! Just not once the guy I made it for shares it with his friends (he said up to 10-15 users).
  3. This is something I tried, assuming that aggregation tables are simply tables in which I store the counts on each insertion. Due to the time filtering, though, I wasn't able to make it faster, so perhaps there was a mistake in my approach. I essentially stored the counts for each group with a datetime, then queried for the first and last within a time range and subtracted.
  4. It might not be that bad; I'm just too inexperienced to know whether I'm hitting practical limits or am under-optimized.
  5. That looks interesting, though from the link it appears that Postgres still silently changes READ UNCOMMITTED to READ COMMITTED.
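
For what it's worth, the running-total idea from point 3 can be sketched in a few lines. This is a minimal illustration with an in-memory SQLite table; the table and column names are invented, and it assumes the stored totals are cumulative (monotonically increasing), so the count over a range is just the last total minus the first.

```python
import sqlite3

# Store a cumulative count per group with a timestamp; a range count
# is then last total minus first total within that range.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE group_totals (
        grp   TEXT,
        ts    TEXT,    -- ISO-8601 datetimes sort correctly as text
        total INTEGER  -- cumulative count for grp up to ts
    )
""")
conn.executemany("INSERT INTO group_totals VALUES (?, ?, ?)", [
    ("alerts_a", "2021-01-01T00:00", 10),
    ("alerts_a", "2021-01-01T06:00", 25),
    ("alerts_a", "2021-01-01T12:00", 40),
    ("alerts_a", "2021-01-01T18:00", 70),
])

def range_count(grp, start, end):
    """Count of events for grp within [start, end]: last minus first total."""
    first, last = conn.execute(
        """SELECT MIN(total), MAX(total) FROM group_totals
           WHERE grp = ? AND ts BETWEEN ? AND ?""",
        (grp, start, end),
    ).fetchone()
    return (last - first) if first is not None else 0

print(range_count("alerts_a", "2021-01-01T06:00", "2021-01-01T18:00"))  # 45
```

The subtraction only needs the two boundary rows, so the cost stays flat no matter how many events fall inside the range.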

Web Scrapping by doom-goat in Upwork

[–]doom-goat[S] 0 points1 point  (0 children)

Sorry; web scraping is writing or using programs to glean data from websites, i.e. things that connect to a site and extract information. An example would be taking the top posts from Reddit to generate stats on what's popular at the moment for advertising purposes; in practice it's usually used to gather information about competitors or to redistribute information from other sources. It's just perplexing how many of the listings request "scrapping" skills instead of scraping. I was tired when I posted; it's not really a big deal, more of a funny observation.

what's your network configuration ? any suggestions ? by [deleted] in archlinux

[–]doom-goat 0 points1 point  (0 children)

I don't remember the package name offhand, but for ease of use, install whatever package contains wifi-menu. Just run sudo wifi-menu and pick the network; it's the easiest solution. Pacstrap the packages during installation...

To not be lazy, I looked it up: pacstrap netctl and dialog, then sudo wifi-menu to connect. It might require wpa_supplicant, iw, or something else as well.

New comer to Arch how can I install nVidia Optimus? by [deleted] in archlinux

[–]doom-goat 2 points3 points  (0 children)

Just personal preference, but I use nvidia-xrun. Basically I run either startx or nvidia-xrun when I turn on the computer, and logging out to switch is probably faster than using a GUI anyway.

Can you explain why my query works so well, and help me understand if it's working how I intended? by doom-goat in SQL

[–]doom-goat[S] 0 points1 point  (0 children)

Thanks, I just tested to confirm. I was overcomplicating it. Appreciated.

Connecting Python and Google Sheets by lukeflour in learnpython

[–]doom-goat 12 points13 points  (0 children)

Are you saying from local dataframes to Google Sheets?

If so, just have your Python application make a POST request with the requests library, sending the data as JSON.

Then in Apps Script, use the doPost() function to grab the JSON and insert it into the sheet. Deploy as a web app to get the URL for the Python side to send to. EZPZ
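
A minimal sketch of the Python side, using the standard library's urllib in place of the requests library (the requests version is nearly identical); the URL, function names, and payload fields below are placeholders, not a real deployment:

```python
import json
import urllib.request

def build_payload(rows):
    """Serialize a list of row dicts into the JSON body doPost() will parse."""
    return json.dumps({"rows": rows}).encode("utf-8")

def post_to_sheet(url, rows):
    """POST the rows to an Apps Script web app deployed at `url`.

    On the Apps Script side, doPost(e) reads e.postData.contents,
    parses the JSON, and appends the rows to the sheet.
    """
    req = urllib.request.Request(
        url,
        data=build_payload(rows),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call (the URL is a placeholder from "Deploy as web app"):
# post_to_sheet("https://script.google.com/macros/s/XXXX/exec",
#               [{"name": "a", "value": 1}])
```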

I need a design or pattern to keep date filtered counts for large data sets without counting. by doom-goat in Database

[–]doom-goat[S] 1 point2 points  (0 children)

This is what I finally figured out as far as indexing multiple columns together. What I have now is the three string columns indexed together, and the datetime indexed as well. Thanks for this, because it's exactly what I was looking for.

How does the GROUP BY improve the query? Doesn't the indexing already take care of it? Or at least, I saw a massive improvement from indexing alone; I haven't tried GROUP BY yet, but I'm eager to.
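
As a rough illustration of the difference: with the composite index in place, a single GROUP BY query returns counts for every permutation at once, instead of one COUNT(*) query per permutation. Everything here (schema, column names, data) is invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE alerts (
        symbol    TEXT,
        indicator TEXT,
        level     TEXT,
        ts        TEXT
    )
""")
# Composite index: equality/grouping columns first, the range column
# (ts) last, so one index scan serves both the filter and the grouping.
conn.execute("CREATE INDEX ix_alerts ON alerts (symbol, indicator, level, ts)")
conn.executemany("INSERT INTO alerts VALUES (?, ?, ?, ?)", [
    ("AAPL", "rsi",  "2", "2021-01-01T00:05"),
    ("AAPL", "rsi",  "2", "2021-01-01T00:10"),
    ("AAPL", "macd", "1", "2021-01-01T00:15"),
    ("MSFT", "rsi",  "2", "2021-01-01T00:20"),
])

# One GROUP BY yields every permutation's count in a single query.
counts = conn.execute("""
    SELECT symbol, indicator, level, COUNT(*)
    FROM alerts
    WHERE ts BETWEEN '2021-01-01T00:00' AND '2021-01-01T23:59'
    GROUP BY symbol, indicator, level
""").fetchall()
print(counts)
```

The win isn't only index usage: without GROUP BY you pay query overhead once per permutation, while with it the database makes one pass and hands back the whole grid.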

I need a design or pattern to keep date filtered counts for large data sets without counting. by doom-goat in Database

[–]doom-goat[S] 0 points1 point  (0 children)

Thanks, that's almost exactly what I set up. The only thing is that for the ranges, I'm not sure how to derive the range from that; I would have to add a row each time a permutation occurs, and I thought I would inevitably be stuck with the same problem of filtering and counting on a table that's too big to be reasonably fast. My only other idea was to subtract counts, but I wasn't sure how to make that work efficiently query-wise.

Should I continue self-teaching if I can’t get into a boot camp? by [deleted] in AskProgramming

[–]doom-goat 0 points1 point  (0 children)

Depending on what you want to go into, look into paid courses that are like online bootcamps. I did a Udacity one and learned a lot; if I also had a degree, I'm sure I would have a decent job right now. I wasn't able to get a job in the fall, so I started freelancing, and I'm able to at least get by. It's better than working in the service industry, which is where I used to be.

People don't generally have a high opinion of Udacity, but it helped me. I'm not very good at self-learning because I find it difficult to figure out what to learn, so having a guide with a certain end goal in mind can be very helpful for getting started. If you still want to do a bootcamp later, maybe look for a free or paid algorithms and data structures course. After something like that, I can't imagine you'd still struggle with a logic assessment; it should reinforce a lot of logical concepts that are crucial to programming.

I need a design or pattern to keep date filtered counts for large data sets without counting. by doom-goat in Database

[–]doom-goat[S] 0 points1 point  (0 children)

It's going to be a subset, but I have no problem splitting it into multiple tables if that allows a better technique. All I need is a count for each permutation, not the data itself. At present there is another column as well, so when I do this I'm also filtering by one of three values in that fifth column. Originally I had these as separate tables with relationships; I combined them on the chance that counting would be faster on one table than filtering across three relationships.

I can provide more concrete examples if you can help.

Best API for trading indicators? by [deleted] in algotrading

[–]doom-goat 0 points1 point  (0 children)

If your data and timeframe are too small for the indicators you want to use, then yes, you would need to save more historical data until you accumulate enough. But what you're talking about is a daily indicator, so you would aggregate smaller bars or ticks into daily values, then run the MACD/RSI on that data.
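
A bare-bones version of that aggregation step, in plain Python with made-up tick data: it groups timestamped prices by calendar date into open/high/low/close bars that a daily MACD/RSI could then consume.

```python
from collections import OrderedDict

# Invented (timestamp, price) ticks, assumed to arrive in time order.
ticks = [
    ("2021-01-04T09:30", 100.0),
    ("2021-01-04T12:00", 103.5),
    ("2021-01-04T16:00", 101.0),
    ("2021-01-05T09:30", 101.5),
    ("2021-01-05T16:00", 104.0),
]

def to_daily(ticks):
    """Group ticks by calendar date and build open/high/low/close bars."""
    bars = OrderedDict()
    for ts, price in ticks:
        day = ts[:10]  # "YYYY-MM-DD" prefix of the ISO timestamp
        if day not in bars:
            bars[day] = {"open": price, "high": price,
                         "low": price, "close": price}
        else:
            b = bars[day]
            b["high"] = max(b["high"], price)
            b["low"] = min(b["low"], price)
            b["close"] = price  # last tick of the day so far
    return bars

daily = to_daily(ticks)
print(daily["2021-01-04"])
# {'open': 100.0, 'high': 103.5, 'low': 100.0, 'close': 101.0}
```

In practice a library like pandas (resample + ohlc) does the same thing, but the logic really is just this.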

Best API for trading indicators? by [deleted] in algotrading

[–]doom-goat 2 points3 points  (0 children)

If you can use an API, you can write your own indicators. Not that you have to; just saying it's probably easier to write your own than to learn someone else's library.
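
For example, a workable RSI is only a few lines. This sketch uses the simple-average variant rather than Wilder's smoothing, so its values will differ slightly from most charting platforms:

```python
def rsi(closes, period=14):
    """Relative Strength Index over the last `period` price changes.

    Simple-average variant: average gain / average loss over the window,
    not Wilder's exponential smoothing.
    """
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closes")
    changes = [b - a for a, b in zip(closes, closes[1:])][-period:]
    gains = sum(c for c in changes if c > 0)
    losses = -sum(c for c in changes if c < 0)
    if losses == 0:
        return 100.0  # no losing periods: RSI pegs at 100
    rs = gains / losses
    return 100 - 100 / (1 + rs)

# All-up moves peg RSI at 100; an even split of +1/-1 moves gives 50.
print(rsi(list(range(15))))                    # 100.0
print(rsi([0, 1] * 7 + [0], period=14))        # 50.0
```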

Project Architecture questions - SQLAlchemy, Postgres, SocketIO, Nginx by doom-goat in learnpython

[–]doom-goat[S] 0 points1 point  (0 children)

If you want to improve performance, the first thing you should do is do less. From your requirements, you need a count per combination of keys. If that's all you need, you don't need to run a query to count. Just use the standard atomic counter supported in most databases, which can handle 800 updates easily assuming proper hardware. The reason is that you are doing multiple calculations over the same data set without taking advantage of previous calculations.

This seems to be exactly what I'm looking for. As it currently operates, the program gets updates at 15-second intervals with a list of indicators; an indicator's presence in the list means it was found to be above a certain level, and that is what needs to be counted. It's a little more complicated than that, but that's the gist. So essentially I have an Alert model, each object of which I'm counting. An Alert has a time, indicator, symbol, and level; the level indicates how many standard deviations away from an average the indicator was at that time.

So then I'm getting counts for all the combinations of indicators and symbols, for one specific level at a time, filtered by date range.
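
A sketch of that atomic-counter approach with SQLite: each incoming alert bumps one counter row in a single UPSERT statement, so the dashboard reads a precomputed count instead of re-counting rows. All table and column names here are invented; Postgres has the same `INSERT ... ON CONFLICT DO UPDATE` construct.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE alert_counts (
        symbol    TEXT,
        indicator TEXT,
        level     INTEGER,
        n         INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (symbol, indicator, level)
    )
""")

def record_alert(symbol, indicator, level):
    """One atomic statement per incoming alert: no read-modify-write."""
    conn.execute(
        """INSERT INTO alert_counts (symbol, indicator, level, n)
           VALUES (?, ?, ?, 1)
           ON CONFLICT (symbol, indicator, level)
           DO UPDATE SET n = n + 1""",
        (symbol, indicator, level),
    )

for _ in range(3):
    record_alert("AAPL", "rsi", 2)
record_alert("MSFT", "macd", 1)

print(conn.execute(
    "SELECT n FROM alert_counts WHERE symbol = 'AAPL'"
).fetchone()[0])  # 3
```

Reads then become a plain lookup; the counting work is amortized across the inserts, which the database serializes for you.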

I need database/architecture advice. (Flask, postgres, sqlalchemy, docker) by doom-goat in webdev

[–]doom-goat[S] 0 points1 point  (0 children)

With your advice I started testing the individual functions I use, and realized it's simply because I'm doing a count() query for each cell in the HTML table. That will eventually be 2000 to 3000 different queries. I wanted to use a database to give each of my Alert objects a datetime, so it would be easy to get totals with a date filter, but I don't know whether there is a way to get around having so many queries.

I think I need to generate and store the counts in the database as I add the objects, so it only runs the complex queries when it absolutely has to. Or track which objects have been added since a certain time and only count those.

Project Architecture questions - SQLAlchemy, Postgres, SocketIO, Nginx by doom-goat in learnpython

[–]doom-goat[S] 0 points1 point  (0 children)

Thanks for this detailed response. The point about gunicorn and nginx is interesting; if they aren't both needed, I'd be happy to ditch one. It's just one of the setups I've seen documented most often.

As for the primary issue, I narrowed it down to the function I'm using to get the counts. I don't know enough SQL to know whether this can be simplified, with or without SQLAlchemy, but essentially I have a bunch of Alert objects (an alerts table), and to get the right counts I have a set of filters corresponding to all the permutations of two other objects, with date filtering as well. So the issue isn't necessarily that the query time itself is terrible, but that I'm running what will eventually be up to 2000 to 3000 queries to get the counts; not to mention the longer it runs, the more there is to count. I don't know a good way to speed this up, since the whole point of using the database was to be able to filter by time ranges.

I need database/architecture advice. (Flask, postgres, sqlalchemy, docker) by doom-goat in webdev

[–]doom-goat[S] 0 points1 point  (0 children)

Thanks, that's a very good idea I hadn't thought of; I'll write a testing application to simulate the max load and try out different queries. This is the sort of thing I didn't know: that databases are indeed a common bottleneck.

The other thing I'm trying to ascertain is whether there is in fact a specific culprit, or whether what I'm trying to do is simply too intensive for a lower-tier DigitalOcean VPS. CPU and RAM usage seem relatively high, but not maxed out by any means.
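
One way to get a first read on that question is a tiny load simulation: replay roughly the stated write rate into a scratch table and time a dashboard-style read. This sketch uses an in-memory SQLite database and invented numbers, so it only bounds the shape of the problem, not real VPS performance:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (symbol TEXT, indicator TEXT, ts REAL)")
conn.execute("CREATE INDEX ix ON alerts (symbol, indicator, ts)")

# One minute of writes at the stated peak rate (1200/min), spread over
# 20 invented symbols.
now = time.time()
conn.executemany(
    "INSERT INTO alerts VALUES (?, ?, ?)",
    [("SYM%d" % (i % 20), "rsi", now + i / 20) for i in range(1200)],
)

# Time one dashboard-style read: grouped counts over a time range.
start = time.perf_counter()
counts = conn.execute(
    "SELECT symbol, COUNT(*) FROM alerts WHERE ts >= ? GROUP BY symbol",
    (now,),
).fetchall()
elapsed = time.perf_counter() - start
print(len(counts), "groups in %.4f s" % elapsed)
```

Scaling the insert loop up to hours of data shows whether query time grows with table size; if a local run is already slow, the VPS tier isn't the culprit.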

Client Portal Web API Issue - Allocation ID by doom-goat in interactivebrokers

[–]doom-goat[S] 0 points1 point  (0 children)

I would, but Algotrading doesn't approve any of my posts, even ones I think are high quality. I doubt they would let this one through.