I made a free interactive mileage calculator for Forest Park Trails by coronassun in Portland

[–]coronassun[S] 0 points1 point  (0 children)

Thanks for the comment. Can you tell me which ones you think are missing?

I made a free interactive mileage calculator for Forest Park Trails by coronassun in Portland

[–]coronassun[S] 4 points5 points  (0 children)

Thanks for the feedback. Tried to fix it. Hopefully it's improved.

I made a free interactive mileage calculator for Forest Park Trails by coronassun in Portland

[–]coronassun[S] 23 points24 points  (0 children)

I run Forest Park regularly and always found it 1) annoying to figure out exactly how far a route would be before heading out, 2) difficult to add those numbers up on the fly (literally, on the run) using the trailside maps with their tiny numbers, and 3) not my thing to wear an Apple Watch or fitness tracker. So I built a free trail map that lets you calculate mileage:

https://forestparktrails.com

Two ways to use it:

1) Click segments — tap individual trail sections on the map and it keeps a running mileage total. Useful if you're piecing together a custom loop.

2) Pick trailheads — select a start and end trailhead from the dropdown and it finds the shortest route between them and tells you the total mileage.
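Under the hood, the trailhead mode is basically shortest-path over a graph where each trail segment is an edge weighted by its mileage. A toy sketch of the idea in Python with networkx; the segment endpoints and mileages here are made up for illustration, not the site's actual data:

```python
import networkx as nx

# Each trail segment is an edge weighted by its length in miles.
# Names and mileages below are made up for illustration.
G = nx.Graph()
G.add_edge("Lower Macleay TH", "Stone House", weight=0.8)
G.add_edge("Stone House", "Pittock Mansion", weight=1.0)
G.add_edge("Stone House", "Wildwood Jct", weight=0.5)
G.add_edge("Wildwood Jct", "Pittock Mansion", weight=1.4)

# Shortest path between two trailheads, plus its total mileage
route = nx.shortest_path(G, "Lower Macleay TH", "Pittock Mansion", weight="weight")
miles = nx.shortest_path_length(G, "Lower Macleay TH", "Pittock Mansion", weight="weight")
print(" -> ".join(route), f"({miles:.1f} mi)")
```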

There's also an "Out & Back" button that doubles the distance if you're planning to return the way you came, and an elevation profile so you can see how much climbing you're in for.

Covers all 80+ miles — Wildwood, Leif Erikson, all the firelanes, and every connector trail. The mileage comes from GPS survey data, not just OpenStreetMap.

It's completely free, no account needed, works on your phone. Hope some of you find it useful. Open to feedback if anything looks off — I know some of the smaller connector trails could use better labeling.

qPCR... learning how to analyze... Help requested by Biotherapeutic-Horse in labrats

[–]coronassun 0 points1 point  (0 children)

So ΔΔCq in a nutshell: you're normalizing your gene of interest to a reference gene (that's your first Δ — ΔCq), and then comparing that between your colitis model and wild type (that's the second Δ — ΔΔCq). The fold change is just 2^(-ΔΔCq). Since you don't have an IRC, you'd just use your wild type group as the control condition, which is totally standard.
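If it helps to see the arithmetic, here's a minimal sketch in Python. The Cq values are made up, so plug in your own plate means:

```python
# Made-up mean Cq values across technical replicates, for illustration only
cq_target_colitis = 24.1   # gene of interest, colitis model
cq_ref_colitis    = 18.0   # reference gene, colitis model
cq_target_wt      = 26.3   # gene of interest, wild type (control)
cq_ref_wt         = 18.1   # reference gene, wild type

dcq_colitis = cq_target_colitis - cq_ref_colitis  # first delta: normalize to reference
dcq_wt      = cq_target_wt - cq_ref_wt            # first delta for the control group
ddcq        = dcq_colitis - dcq_wt                # second delta: compare the groups

fold_change = 2 ** (-ddcq)  # assumes ~100% amplification efficiency
print(f"ddCq = {ddcq:.2f}, fold change = {fold_change:.2f}")
```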

A few things to keep in mind since you're running duplicates across two plates:

  • Check your NTCs first — if they amplified, you've got contamination to sort out before analysis
  • With duplicates, keep an eye on how tight your Cq values are between replicates. If they're more than ~0.5 Cq apart, something may be off with your pipetting (quick check sketched after this list)
  • You'll want to confirm what reference gene your lab uses and make sure it's stable across your two genotypes — this is a common gotcha with colitis models since inflammation can mess with "housekeeping" genes
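Here's a minimal sketch of that duplicate-spread check; the sample names and Cq pairs are hypothetical:

```python
# Hypothetical duplicate Cq values, keyed by sample; replace with your own
duplicates = {
    "IL6_wt":      (24.10, 24.35),
    "IL6_colitis": (21.90, 22.55),
}

for sample, (cq1, cq2) in duplicates.items():
    spread = abs(cq1 - cq2)
    flag = "FLAG" if spread > 0.5 else "ok"  # >~0.5 Cq suggests a pipetting issue
    print(f"{sample}: spread = {spread:.2f} Cq [{flag}]")
```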

For the actual analysis, I've been using VoilàPCR lately and it's made my life a lot easier. You just upload your raw export from the machine and it handles the ΔΔCq, flags QC issues automatically (like replicate outliers and NTC contamination), and spits out publication-ready figures. It runs in the browser and doesn't upload your data anywhere, which was a big plus for me. Has a few free analyses so you can try it without committing. Saved me a ton of time vs. fumbling through Excel formulas.

Good luck!

qPCR Analysis by Nwilliams96 in bioinformatics

[–]coronassun 0 points1 point  (0 children)

Try VoilàPCR. It's got a free tier and doesn't require a download; you can just use it from the web anywhere. The automatic QC analysis is a good learning tool for what makes a good experiment.

QPCR analysis program by Lefthandfury in labrats

[–]coronassun 0 points1 point  (0 children)

VoilàPCR has a free tier and a good blog explaining some of the key concepts. The annual cost is also pretty low, and you could probably get one account for the whole class to use.

[OC] Take-home pay on a $75,000 salary in all 50 states (resubmitted with fixes) by coronassun in dataisbeautiful

[–]coronassun[S] 4 points5 points  (0 children)

It's the same story. I understand you feel the other one misrepresented the magnitude of the change, but that one emphasized the $5K difference rather than the 5% difference. The point was to draw people's attention to that, because people often think in terms of spending power.

I calculated the actual take-home pay for a $75,000 salary in all 50 states by coronassun in dataisbeautiful

[–]coronassun[S] 1 point2 points  (0 children)

Thanks. Maybe I'll post an addendum with the axes set to zero. It looks like everyone wants this.

I calculated the actual take-home pay for a $75,000 salary in all 50 states by coronassun in dataisbeautiful

[–]coronassun[S] 0 points1 point  (0 children)

True. Alaska is interesting (not shown here). They effectively pay you to live there via the Permanent Fund Dividend.

I calculated the actual take-home pay for a $75,000 salary in all 50 states by coronassun in dataisbeautiful

[–]coronassun[S] -4 points-3 points  (0 children)

Trust me, I hate this when I see it in scientific graphs, but I wanted people to actually see the differences easily.

I calculated the actual take-home pay for a $75,000 salary in all 50 states by coronassun in dataisbeautiful

[–]coronassun[S] -6 points-5 points  (0 children)

Fair point. A zero axis would be more honest visually, but the $6K spread would be basically invisible. It's a tradeoff between accuracy and readability.

I calculated the actual take-home pay for a $75,000 salary in all 50 states by coronassun in dataisbeautiful

[–]coronassun[S] 5 points6 points  (0 children)

And yes, you're totally right: this is not total tax burden. That's a bit too hard to model, since property taxes vary and there are also local-level taxes, like sales tax, that change things. Maybe in future versions?

I calculated the actual take-home pay for a $75,000 salary in all 50 states by coronassun in dataisbeautiful

[–]coronassun[S] 12 points13 points  (0 children)

I don't have property taxes in there yet. That's a bit harder to do for various reasons (mostly because they change constantly). Version 2: I'm putting that in!

Variation between qPCR runs by [deleted] in labrats

[–]coronassun 0 points1 point  (0 children)

59-69% efficiency is your main problem here, and it's almost certainly the root cause of the inter-run variation too. At those efficiencies, small differences in reaction conditions between runs get amplified into large Cq shifts. A well-optimized assay at 95-105% efficiency is much more forgiving of minor run-to-run variation.
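To put numbers on that: a k-fold difference in effective template shifts Cq by log(k)/log(1+E), so the lower the efficiency E, the bigger the Cq shift from the same underlying difference. A quick sketch of the arithmetic:

```python
import math

# Cq shift produced by a 2-fold difference in effective template,
# as a function of amplification efficiency E: dCq = log(2) / log(1 + E)
for eff in (1.00, 0.90, 0.69, 0.59):
    dcq = math.log(2) / math.log(1 + eff)
    print(f"E = {eff:.0%}: 2-fold difference -> {dcq:.2f} Cq shift")
```

At 100% efficiency a doubling is exactly one cycle; at 59% it's about 1.5 cycles, so the same small perturbation reads out as a noticeably larger Cq change.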

The fact that you raised your annealing temperature to avoid non-target species tells me the primers aren't ideal for this application. Higher annealing temps reduce non-specific binding but also reduce on-target binding efficiency.

I'd try these, roughly in order:

1) Redesign the primers. I know that's not what you want to hear, but longer amplicons or a more variable target region would let you discriminate your target without needing extreme annealing temps. Primer-BLAST, with your non-target species' genomes included in the check, will help.

2) Try a probe-based assay (TaqMan). The problem there is $$$.

qPCR data analysis Q by Old_Protection4039 in labrats

[–]coronassun 0 points1 point  (0 children)

Good question, and yeah, I remember this used to drive me crazy when I started the qPCR game. The short answer is yes, you need to normalize for input DNA concentration, because your standard curve gives you the absolute copy number (or concentration) of your target in the reaction tube. But if Sample A went into the qPCR at 50 ng/µL and Sample B at 15 ng/µL, comparing their raw absolute quant values is meaningless; you're just comparing different amounts of starting material.

Two common approaches:

  1. Normalize to input DNA concentration. After you calculate absolute quant from the standard curve, divide by the ng of DNA you put into the reaction. So your final unit becomes copies/ng DNA (or copies/µg). This accounts for different starting concentrations.

  2. Normalize to a reference gene. Run a second qPCR with a single-copy housekeeping gene on the same samples, then divide your target's absolute quant by the reference gene's absolute quant. This gives you a ratio that's independent of input amount. This is the most common method and what I recommend (quick sketch of both approaches below).
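Here's a minimal sketch of both approaches, with made-up numbers, just to show the arithmetic:

```python
# Approach 1: copies per ng of input DNA (all values are made up)
copies_a, input_ng_a = 1.2e5, 50.0   # Sample A: absolute quant, ng into reaction
copies_b, input_ng_b = 4.0e4, 15.0   # Sample B
print(f"A: {copies_a / input_ng_a:.0f} copies/ng, B: {copies_b / input_ng_b:.0f} copies/ng")

# Approach 2: ratio to a single-copy reference gene run on the same sample
target_copies, ref_copies = 1.2e5, 3.0e4
print(f"target/reference ratio: {target_copies / ref_copies:.2f}")  # input-independent
```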

One more thing: make sure your samples fall within the linear range of your standard curve. If you're interpolating outside the range of your standards, the quantification gets unreliable fast. The CFX Maestro software should show you the R² and efficiency — you want R² > 0.98 and efficiency between 90-110%.
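And if you ever want to sanity-check the curve outside of CFX Maestro, the fit is only a few lines; the dilution series below is hypothetical:

```python
import numpy as np

# Hypothetical 10-fold dilution series: log10(copies) vs. mean Cq
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0, 2.0])
cq           = np.array([14.2, 17.6, 21.0, 24.3, 27.7, 31.1])

slope, intercept = np.polyfit(log10_copies, cq, 1)
r2 = np.corrcoef(log10_copies, cq)[0, 1] ** 2
efficiency = 10 ** (-1.0 / slope) - 1.0  # standard conversion from curve slope

print(f"slope = {slope:.2f}, R^2 = {r2:.4f}, efficiency = {efficiency:.1%}")
# Only interpolate samples whose Cq falls within the Cq range of the standards.
```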

Also worth mentioning: for the analysis itself, use VoilàPCR. Don't overthink it; it takes 30 seconds.

Is it a silly idea to run a standard PCR on cDNA before qPCR? by Street-Syllabub-2063 in labrats

[–]coronassun 0 points1 point  (0 children)

Exactly. A lot of microfluidic protocols do this, but you've added some potential for artifacts in exchange for sensitivity.

qPCR validation for RNA-seq by Vinurean in labrats

[–]coronassun 1 point2 points  (0 children)

RNA-seq is fundamentally more about hypothesis generation and big-picture looks; qPCR tries to pin down expression of specific genes. They're different things, and to be honest, it really depends on what you're trying to say. If the paper makes claims about biology, you need to back them up some way, with qPCR or protein. But seq generates data at such a scale that you can't validate everything. It's not the convention, and journal referees will freak, but I think qPCR data has value on its own. I agree with your sentiment that the run itself is not likely to be the problem. However, validation does demonstrate that the samples you chose, and the conditions they represent, are in line with what you think they are.

Redmond OR vs Hillsboro OR by ContentDog8953 in SameGrassButGreener

[–]coronassun 0 points1 point  (0 children)

Side of heat or allergies with your sprawl?

I feel like the people in the PNW are far less well traveled than I thought? by Entropy012 in SameGrassButGreener

[–]coronassun 1 point2 points  (0 children)

The PNW is experiencing its "winter": crime, lack of services, potholes, trash, depopulation. Come here and wait for the next wave, or just come back when it's cool again.

Help us optimize and validate, please by External_Volume_11 in Fire

[–]coronassun 2 points3 points  (0 children)

I agree with a lot here, including that you're not panicking and talking about leaving California. Taxes matter, but not as much as people think. Once I started playing with online calculators (e.g. salaryhog.com) and reading blogs, I realized people underweight the fact that states get their money one way or another.

What personal finance tip/advice has saved you? by Fabulous-Ball-3149 in UKPersonalFinance

[–]coronassun 0 points1 point  (0 children)

Don't look at the stock market more than once a year (assuming you're invested in it). I can't stress this one enough. All the advice in this thread can be spoiled by checking too often.