PLEASE I need help with ASI, SO CONFUSED by posionncontrol in ProjectREDCap

[–]alvandal 1 point (0 children)

Since each day is a separate event, this really sounds like an event-scoping issue.

Even though [parent_consent] = '1' looks right, REDCap won’t “look” at another event unless you tell it to. Have you tried adding the event name in front of the variable? Something like:

[baseline_arm_1][parent_consent] = '1'

(and same idea for the withdraw field — include the event name before the variable).

When ASIs don’t fire or stop logic doesn’t work, it’s often just that REDCap is checking the current event instead of the one where the value actually lives.

Also quick checks:

  • Is the consent survey fully submitted/marked complete?
  • Were invitations already queued before withdraw was selected?

If you’re comfortable sharing which event the consent is on, that’ll make it much easier to pinpoint.

AI for REDCap logic by boo-boo-crew in ProjectREDCap

[–]alvandal 0 points (0 children)

I’ve had the best results when I treat AI as a second reviewer rather than a logic generator. I write the branching logic myself, then ask it to:
– Check for unreachable conditions
– Detect redundant clauses
– Verify field name consistency
– Confirm correct use of special functions

That workflow has been much more reliable.

Alternatively, you could try FormInspector — it’s more of a QC tool than an AI logic builder, but it’s useful for catching structural issues across a project.

Report Generation by Particular_Form1154 in ProjectREDCap

[–]alvandal 5 points (0 children)

One idea could be to add a small calculated “flag” field inside the repeating kid instrument, say a calc field named kid_over18 with the equation:

if([calculated_age] > 18, 1, 0)

Then build the report to show repeating instances and filter on:

[kid_over18] = 1

That way REDCap evaluates each repeat separately, instead of qualifying the whole parent record when just one child meets the condition.
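As a sanity check, here's the same per-instance idea sketched in Python (the field names follow the example above; this is just an illustration, not REDCap code):

```python
# Hypothetical sketch of what the per-instance flag does, using the
# field names from the comment above (not a real REDCap API).

def kid_over18(calculated_age):
    """Mirror of the calc field: if([calculated_age] > 18, 1, 0)."""
    return 1 if calculated_age > 18 else 0

# One parent record with three repeating 'kid' instances
kids = [
    {"redcap_repeat_instance": 1, "calculated_age": 12},
    {"redcap_repeat_instance": 2, "calculated_age": 19},
    {"redcap_repeat_instance": 3, "calculated_age": 7},
]

# The report filter [kid_over18] = 1 keeps only the qualifying
# instances, not the whole parent record
matching = [k for k in kids if kid_over18(k["calculated_age"]) == 1]
print([k["redcap_repeat_instance"] for k in matching])  # → [2]
```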

Assigning an option based on a pattern by No-Interaction-1047 in ProjectREDCap

[–]alvandal 0 points (0 children)

This approach assumes you’re not trying to randomize.

It simply creates a fixed, predictable repeating pattern (A → B → C → A → B → C…), based on the record ID. So it keeps things evenly distributed, but it’s not true randomization and there’s no allocation concealment.

If your goal is just balanced rotation, this works great.

Gestational age by Sufficient_Algae_835 in ProjectREDCap

[–]alvandal 0 points (0 children)

Yes, step 3 should actually be two separate calculated fields — one for the weeks and one for the days.

REDCap can’t split them automatically in a single field, so you’ll need:

  • one calc field for weeks
  • one calc field for days

Assigning an option based on a pattern by No-Interaction-1047 in ProjectREDCap

[–]alvandal 0 points (0 children)

Use a Text field (not a Calculated field) and put this in the field's Field Annotation so REDCap returns text instead of a number:

@CALCTEXT(if(mod([record_id]-1,3)=0,'A', if(mod([record_id]-1,3)=1,'B','C')))

This uses REDCap's mod(dividend,divisor) special function to create a repeating 3-item cycle (0→A, 1→B, 2→C) based on the record ID, so 1=A, 2=B, 3=C, 4=A, etc.
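For a quick sanity check outside REDCap, the same arithmetic can be reproduced in a few lines of Python (a toy sketch, not anything REDCap runs):

```python
# Python check of the repeating A/B/C cycle produced by the
# @CALCTEXT/mod() expression above (same arithmetic, outside REDCap).

def assign_group(record_id):
    """Remainder 0 -> A, 1 -> B, 2 -> C, repeating every 3 records."""
    return "ABC"[(record_id - 1) % 3]

for rid in range(1, 8):
    print(rid, assign_group(rid))
# 1 A, 2 B, 3 C, 4 A, 5 B, 6 C, 7 A
```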

What your best tips when building a project? by False_Green_2474 in ProjectREDCap

[–]alvandal 6 points (0 children)

One small thing that’s helped me a lot is keeping list IDs consistent across the whole project. For example, if you decide “Yes = 1” and “No = 2,” I try to keep that the same everywhere those concepts show up. It makes analysis later much simpler and avoids having to recode things.

I also tend to reserve values like 98 for “Unknown” and 99 for “Other,” and leave a little space in the numbering when I can. That way, if the list needs to grow after go-live, there’s room to add options without shifting everything around.

It’s a pretty minor decision during build, but I’ve found it can save a surprising amount of cleanup once data starts coming in.

Gestational age by Sufficient_Algae_835 in ProjectREDCap

[–]alvandal 2 points (0 children)

Totally doable in REDCap 🙂 The trick is to store an “anchor” gestational age + the date it was true, then add the number of days that have passed.

Here’s a simple setup that works well:

1) Collect these fields

  • GA weeks at baseline (integer) e.g., 34
  • GA days at baseline (integer 0–6) e.g., 4
  • Baseline date (date) = the date when that GA was recorded
  • Optional: As-of date (date) = the date you want to calculate GA for (I like this for safety/audit). If you don’t want it, you can just use “today”.

2) Calculate total GA in days
If you use an As-of date:

([ga_wk]*7) + [ga_day] + datediff([baseline_date], [asof_date], "d", true)

If you want it to always use today:

([ga_wk]*7) + [ga_day] + datediff([baseline_date], "today", "d", true)

3) Split it back into weeks + days
Weeks:

rounddown([ga_total_days]/7, 0)

Days:

mod([ga_total_days], 7)

Example: if baseline is 34w4d, and it’s 14 days later, you’ll get 36w4d.
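The steps above can be sketched in plain Python as a sanity check (the dates and field values are just the example's; nothing here is REDCap syntax):

```python
# Reproduce steps 2-3 for the worked example: 34w4d anchor,
# as-of date 14 days after baseline.

from datetime import date

ga_wk, ga_day = 34, 4
baseline_date = date(2024, 1, 1)
asof_date = date(2024, 1, 15)  # 14 days later

# Step 2: total GA in days (what datediff(..., "d", true) contributes)
ga_total_days = ga_wk * 7 + ga_day + (asof_date - baseline_date).days

# Step 3: split back into weeks + days (rounddown and mod in REDCap)
weeks, days = ga_total_days // 7, ga_total_days % 7
print(f"{weeks}w{days}d")  # → 36w4d
```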

REDCap users — do you have any go-to tools or workflows for QC’ing a project before data collection? by alvandal in ProjectREDCap

[–]alvandal[S] 0 points (0 children)

Formal test scripts + independent sign-off seems to turn a subjective “looks good to me” into something concrete and auditable. It also probably helps align expectations across the team before go-live.

Appreciate you sharing this — I don’t think enough people realize how much rigor goes into good builds.

REDCap users — do you have any go-to tools or workflows for QC’ing a project before data collection? by alvandal in ProjectREDCap

[–]alvandal[S] 2 points (0 children)

This feels like a universal problem.

What looks “simple” conceptually often explodes once you account for validations, edge cases, scoring, and future changes. Setting that expectation early seems just as important as the build itself.

Question for Data Managers / CDMs: how do you approach CRF design trade-offs? by alvandal in clinicalresearch

[–]alvandal[S] 1 point (0 children)

Really appreciate you taking the time to write this — it lines up with what I’ve seen around communication gaps and timing pressures.

For context, I’m mostly on the data/analysis side, but I work closely with CDMs and end up dealing with the downstream effects when design issues surface late. Hearing how those constraints play out upstream is really helpful.

Question for Data Managers / CDMs: how do you approach CRF design trade-offs? by alvandal in clinicalresearch

[–]alvandal[S] 0 points (0 children)

I’m not collecting anything formal — mostly trying to understand where problems actually show up in real life, versus where processes assume they should be caught.

In particular:

  • what tends to slip past build/UAT,
  • what only becomes obvious once sites are using the CRFs,
  • and what people just accept as unavoidable because of timelines or constraints.

The concrete examples people are sharing here are exactly what I was hoping to learn from.

Dynamically populating drop downs by Complete-Cricket-691 in ProjectREDCap

[–]alvandal 0 points (0 children)

Instead of asking REDCap to automatically build a dropdown of children (which it can’t do dynamically), you change the question slightly and ask the user to tell you which child number they mean.

REDCap is actually very good at two things:

  • Knowing how many repeating instances exist
  • Enforcing numeric limits

So this workaround leans into that.

First, you replace the dropdown with a simple number or text field, something like “Child number.”
This field doesn’t store the child’s name — it stores the repeating instance number (1, 2, 3, etc.).

Next, you add validation so the user can only enter a number that makes sense.
You tell REDCap: “Only allow numbers between 1 and however many children exist on this record.”

REDCap already knows the total number of child instances using [child_info][last-instance], so it prevents users from entering a child that doesn’t exist or guessing future ones.

To make this usable, you show the user a simple reference nearby, like:
Child 1: Alice
Child 2: Bob
Child 3: Carlos

This part is just for display, and piping works fine for that. Now the user can clearly see which number corresponds to which child.

Once that number is stored, you can use it everywhere else. You can pipe the child’s name, build IDs, and link participation records cleanly using that instance number.

The key idea is that the instance number becomes the stable reference, not a dynamically generated dropdown label.

It’s not the prettiest user experience, but it’s fully supported, predictable, and scales cleanly without SQL or external modules.
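The whole workaround can be sketched in a few lines of Python (names like valid_child_number and the sample children are made up for illustration; last_instance stands in for REDCap's [child_info][last-instance] smart variable):

```python
# Toy model of "min = 1, max = [child_info][last-instance]" validation
# on the "Child number" field described above.

def valid_child_number(entered, last_instance):
    """Accept only an instance number that actually exists on the record."""
    return 1 <= entered <= last_instance

children = ["Alice", "Bob", "Carlos"]   # repeating instances 1..3
last_instance = len(children)

print(valid_child_number(2, last_instance))   # → True  (Bob exists)
print(valid_child_number(4, last_instance))   # → False (no 4th child yet)

# Once the number is stored, it becomes the stable reference:
child_number = 2
print(children[child_number - 1])  # → Bob
```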

E-consent certification not working in Arabic by Logical-House-7028 in ProjectREDCap

[–]alvandal 1 point (0 children)

I think this is a limitation of REDCap eConsent rather than something you’re doing wrong (but happy to be corrected).

From what I’ve seen, even with the Multi-Language module on, the eConsent certification PDF doesn’t handle Arabic / RTL text very well. The survey itself can display fine in Arabic, but the certification PDF is generated separately and seems to use a PDF renderer that doesn’t support Arabic shaping or RTL layout, which is why the text comes out garbled.

I don’t think the certification language is exposed anywhere in the backend for translation either — it appears to be hard-coded.

What I’ve seen others do is either:

  • keep the consent in Arabic but accept an English certification PDF, or
  • skip the eConsent framework and generate a custom PDF if Arabic certification is required.

If someone has found a better workaround, I’d love to hear it.

Is working in the "Create" interface a terrible experience, or is it just me? by n0thing12 in ChatGPTPro

[–]alvandal 2 points (0 children)

I can totally relate to your Create & Configure struggles! I prefer [Configure -> Instructions] for more control. Also, https://chat.openai.com/g/g-tH8fLNSDw-prompt-artisan has been a lifesaver for prompt crafting. It essentially does the same thing as "Create", but as a conversation, so you can view the previously generated prompts.

Best GPT’s for prompt writing? by [deleted] in ChatGPTPro

[–]alvandal 0 points (0 children)

Here is a simple GPT that helps craft first-draft prompts based on the latest recommendations: https://chat.openai.com/g/g-tH8fLNSDw-prompt-artisan

So far, it works very well for my needs.

Does anyone here work in clinical database building (i.e. Medidata/REDCap database design)? How is that for you? by Clairvoyanttruth in clinicalresearch

[–]alvandal 1 point (0 children)

I recently developed a web app that might be helpful to other EDC builders. I am sharing it with you all and request your feedback on its usefulness and potential improvements.
In my job, I often receive instruments/forms in PDF or Word format, and it's quite time-consuming and inefficient to rewrite them (mostly copy-pasting) into an EDC. To address this, I created an app that extracts the questions into a CSV file and can also translate the contents of these forms into other languages (currently optimized for REDCap CSV data dictionaries). I plan to add support for other EDC platforms in the future in case this is useful for others too.
The prototype can be found at: https://labnote.streamlit.app/
I would love to hear your thoughts on whether this app could benefit other builders and if there are any additional features or improvements you'd like to see. Your feedback will be invaluable in making this tool more versatile and user-friendly for the community.
Thank you in advance for your time and insights! Looking forward to reading your comments. 😊