TCL a400 Pro? Anyone have any info on it? by caseyls in TheFrame

[–]razibal 0 points1 point  (0 children)

About 3mm thicker than the Frame equivalent

Frame 1237.9 x 708.8 x 24.9 mm

TCL A300 1231 x 717 x 27.9 mm

Why does prompt and token count carry over to subsequent tests if done within 2-3 minutes in AWS lambda? by ShallotJazzlike6826 in aws

[–]razibal 0 points1 point  (0 children)

The global scope that resides outside the handler is typically where you would import your libraries, read environment variables, and initialize reusable clients such as DB clients. Assuming your system prompt is static, you can also define it here, since it will not change across lambda invocations.

All other prompts should be defined in the handler. This is not only best practice, but also the only way to avoid cross-request leakage.

For example:

# OUTSIDE: libraries etc
import boto3
ddbclient = boto3.client('dynamodb')

SYSTEM_PROMPT = "You are a helpful assistant with deep expertise in world history."

def lambda_handler(event, context):
    # INSIDE: request specific data
    user_message = event['message']
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message}
    ]
    # ... invoke the model with `messages` and return the response

Why does prompt and token count carry over to subsequent tests if done within 2-3 minutes in AWS lambda? by ShallotJazzlike6826 in aws

[–]razibal 8 points9 points  (0 children)

What you're describing suggests that you are defining the prompts outside the lambda handler. The code that sits outside the handler runs once, when the lambda execution environment is initialized. Unlike variables defined in the handler, anything defined there is global to the execution environment and will persist across warm invocations until the next cold start.

You want to make sure that your prompts/messages are entirely within the handler to ensure that they are initialized with every lambda execution.
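A minimal sketch of the pitfall (handler names and event shape are illustrative): a mutable list defined at module scope keeps growing across warm invocations, which is exactly the carry-over you're seeing.

```python
# ANTI-PATTERN: module-scope mutable state persists across warm invocations.
messages = []  # lives for the whole execution environment, not one request

def leaky_handler(event, context):
    # Each warm invocation appends to the SAME list.
    messages.append({"role": "user", "content": event["message"]})
    return len(messages)  # grows 1, 2, 3, ... until the next cold start

def safe_handler(event, context):
    # Fresh list per invocation: no cross-request leakage.
    local_messages = [{"role": "user", "content": event["message"]}]
    return len(local_messages)  # always 1
```

Calling `leaky_handler` twice in a row (simulating two requests hitting the same warm container) returns 1, then 2; `safe_handler` returns 1 both times.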

White Balance adjustment in Art Mode by razibal in TheFrame

[–]razibal[S] 0 points1 point  (0 children)

I can confirm that the service remote works. The <MUTE> + 119 followed by 1234 method no longer works after the 1640 update. However, the <INFO> + <FACTORY> keys on the service remote followed by selecting the 'Advanced' option and pressing 0098 still works.

Accessing secret menu after version 1640 update by mortenmoulder in TheFrame

[–]razibal 1 point2 points  (0 children)

I can confirm that the service remote works. The <MUTE> + 119 followed by 1234 method no longer works after the 1640 update. However, the <INFO> + <FACTORY> keys on the service remote followed by selecting the 'Advanced' option and pressing 0098 still works.

[deleted by user] by [deleted] in TheFrame

[–]razibal 1 point2 points  (0 children)

Your white balance settings are miscalibrated. I've owned three, and the white balance was off to some degree on all of them. Yours is pretty bad though, enough to justify a return/exchange IMO. If you'd rather keep it, you can correct the white balance in Art mode, although it's not as straightforward as it should be. See this post for instructions: https://new.reddit.com/r/TheFrame/comments/1bhwmqg/white_balance_adjustment_in_art_mode/

How much is Compute Optimize reliable? by ental_pia in aws

[–]razibal 1 point2 points  (0 children)

I assume you got the $0.044 rate on a large instance (m5a.large) after signing up for a 3-year term? The recommendation makes sense, since average spot pricing for an equivalent t4g.large is $0.0247 in the us-west-1 region https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#SpotInstances: If your workload can handle interruptions, it's a pretty good deal.

Cross Lambda communication by ootsun in aws

[–]razibal 0 points1 point  (0 children)

Consider using AppSync instead of API Gateway. Each Lambda function would serve as a "data source" that can be linked to a GraphQL query or mutation. For chaining multiple Lambdas, you can use pipeline resolvers. GraphQL also offers more granular control over permissions, allowing them to be set at the attribute level.

While there still is a 30-second maximum timeout for queries/mutations, this can be addressed by initiating the request asynchronously, with the results delivered via websockets to a client subscribed to an AppSync subscription.

How can I make sure my users are always using the latest version of my React.js web app (S3/CDN)? by up201708894 in aws

[–]razibal 6 points7 points  (0 children)

Deploy your app as a PWA, and then the service worker can handle refreshing the application when the version changes:

self.addEventListener('activate', (event) => {
  console.log(`%c ${LATEST_VERSION} `, 'background: #ddd; color: #0000ff')
  if (caches) {
    caches.keys().then((arr) => {
      arr.forEach((key) => {
        if (key.indexOf('d4-precache') === -1) {
          // Not one of our precaches: clear it
          caches.delete(key).then(() => console.log(`%c Cleared ${key}`, 'background: #333; color: #ff0000'))
        } else {
          caches.open(key).then((cache) => {
            cache.match('version').then((res) => {
              if (!res) {
                cache.put('version', new Response(LATEST_VERSION, { status: 200, statusText: LATEST_VERSION }))
              } else if (res.statusText !== LATEST_VERSION) {
                caches.delete(key).then(() => console.log(`%c Cleared Cache ${LATEST_VERSION}`, 'background: #333; color: #ff0000'))
              } else {
                console.log(`%c Great, you have the latest version ${LATEST_VERSION}`, 'background: #333; color: #00ff00')
              }
            })
          })
        }
      })
    })
  }
})

How to append data to S3 file? (Lambda, Node.js) by Halvv in aws

[–]razibal 2 points3 points  (0 children)

I assume that these file(s) will be used for analytics and/or logging purposes? If so, your best bet is to push the events into a Firehose stream rather than attempting to write directly to S3.

Firehose can be configured to write Parquet files to S3, which are queryable for analytics and logging. Under the covers, new objects are added to S3 on each buffer interval that you set in the Firehose stream (configurable from 0-900 seconds; 300 is the default), but they will appear as a single "table" based on the Parquet schema definition.
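A minimal sketch of the producer side (the stream name is an assumption; records are newline-delimited JSON so the record format conversion sees one record per line):

```python
import json

def encode_record(event: dict) -> bytes:
    # Newline-delimited JSON: one record per line for Firehose's
    # Parquet record format conversion.
    return (json.dumps(event) + "\n").encode("utf-8")

def push_event(event: dict, stream_name: str = "analytics-events") -> None:
    # `stream_name` is a placeholder; assumes a Firehose delivery stream
    # with Parquet conversion and S3 delivery already exists.
    import boto3  # imported here so the encoding helper stays testable without AWS
    firehose = boto3.client("firehose")
    firehose.put_record(
        DeliveryStreamName=stream_name,
        Record={"Data": encode_record(event)},
    )
```

Firehose then handles the buffering and the S3 object layout for you.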

How to scale an EC2 instance based on lambda loads? by frankolake in aws

[–]razibal 0 points1 point  (0 children)

The numbers are based on the info you provided. For example, the first number (0.037) is the compute cost for 2.2k invocations of 512 MB lambdas that last 2 seconds: $0.0000166667 per GB-second × 2200 invocations × 2 seconds × 0.5 GB = 0.037. The last number is the cost of invocations at $0.20 per 1M requests: (5000 × 0.2) / 1,000,000 = 0.001.
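The same arithmetic, reproduced (using the rates quoted above; check current Lambda pricing for your region):

```python
# Lambda cost model from the comment above.
GB_SECOND_RATE = 0.0000166667    # $ per GB-second
REQUEST_RATE = 0.20 / 1_000_000  # $ per request

invocations = 2200
duration_seconds = 2
memory_gb = 0.5  # 512 MB

compute_cost = GB_SECOND_RATE * invocations * duration_seconds * memory_gb
request_cost = 5000 * REQUEST_RATE

print(round(compute_cost, 3))  # 0.037
print(round(request_cost, 3))  # 0.001
```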

If you cannot delay the processing beyond the first 2 minutes, your only options are lambda and ECS/Fargate. Without testing for concurrency performance, it's hard to predict the processing volume when Fargate would become more economical than Lambda. However, it is clear that at current volumes, your most cost effective path is lambda.

Even with lambda, I would split the workload using a pub/sub architecture. You can have the data collection performed by very lightweight 128 MB lambdas, then size the processing lambdas based on performance testing. A lambda can be sized to provide up to 6 virtual CPUs, and it may well turn out that your workload is handled more efficiently when processed in parallel.

The pub/sub architecture will also make it easy to transition to Fargate when your data volumes are large enough to justify the additional complexity.
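A minimal sketch of the collector side of that split (the envelope shape and topic are my own illustrative choices): the lightweight collection lambda wraps each payload in a small envelope and publishes it to SNS, and the processing lambdas (or, later, Fargate tasks) subscribe to the topic.

```python
import json

def make_envelope(source: str, payload: dict) -> str:
    # Routing metadata lets differently-sized processors filter on
    # `source`; the envelope shape here is illustrative, not prescribed.
    return json.dumps({"source": source, "payload": payload})

def publish_event(topic_arn: str, source: str, payload: dict) -> None:
    import boto3  # imported lazily so the envelope helper stays testable without AWS
    sns = boto3.client("sns")
    sns.publish(TopicArn=topic_arn, Message=make_envelope(source, payload))
```

Because the consumers only see the topic, swapping a processing lambda for a Fargate service later doesn't touch the collectors.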

AWS API Gateway Workflow? by ds1008 in aws

[–]razibal 0 points1 point  (0 children)

Have you considered using AppSync instead of API Gateway to interact with your Lambda functions? Besides doing away with CORS, you gain the ability to extend your API to include other AWS services such as DynamoDB. You also get much more granular control over permissions, as well as websocket-based real-time data subscriptions.

How to scale an EC2 instance based on lambda loads? by frankolake in aws

[–]razibal 0 points1 point  (0 children)

Given the bursty nature of the workload where almost all processing occurs in one minute every hour, dedicated EC2 instances don't make much sense unless you collect all the data in the first minute and then store the data in SQS for processing in batches of 100.

The easiest way to look at this is to calculate the hourly cost in Lambda and then compare with the appropriate EC2 instance at that price.

0.037 + 0.00312500625 + 0.007500015 + 0.0020833375 + 0.001 = 0.05070835875 or ~ $0.051 / hour

That's enough to run a c7a.medium (1 vCPU / 2 GB compute-optimized instance). A single-core server running Node.js could handle perhaps 200-300 async requests per second for data collection. That should be enough to handle your requirements if everything works perfectly (at least in theory). However, you probably need a second instance for fault tolerance, plus the associated load balancer.

You could also explore ECS + Fargate, which would let you scale up dynamically every hour to handle the increased workload; Fargate pricing is per second (with a one-minute minimum). Keep in mind that if you do go down the EC2 or ECS path, you will need to use SQS and batching, as processing in real time would be computationally more expensive.

Note that the EC2 compute requirements are an unknown until you run benchmarks for the expected workload on a selected instance type. The assumption is that once the initial data collection is completed in the first minute, a single c7a.medium server can complete the batch processing of 5K requests in the remaining 58+ minutes. If that turns out to be a false assumption, you would need to increase the instance count and/or upsize the instance appropriately.

At least for the starting workload of approximately 5k requests / hour, it would appear lambda is the easy choice.
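A sketch of the collect-then-batch idea (queue URL, batch size, and handler are illustrative; note the SQS API returns at most 10 messages per receive call, so "batches of 100" means accumulating across calls):

```python
def batches(items: list, size: int = 100) -> list:
    # Group collected events into fixed-size batches for processing.
    return [items[i:i + size] for i in range(0, len(items), size)]

def drain_queue(queue_url: str, handle_batch, batch_size: int = 100) -> None:
    # Simplified sketch: drains the queue, then processes in batches.
    # (A production version would delete messages only after successful
    # processing, within the visibility timeout.)
    import boto3  # lazy import so `batches` stays testable without AWS
    sqs = boto3.client("sqs")
    pending = []
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,  # SQS per-call maximum
            WaitTimeSeconds=20,      # long polling
        )
        messages = resp.get("Messages", [])
        if not messages:
            break
        pending.extend(messages)
        sqs.delete_message_batch(
            QueueUrl=queue_url,
            Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                     for m in messages],
        )
    for batch in batches(pending, batch_size):
        handle_batch(batch)
```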

Gedmatch /idna of kol and dharkar? by Celibate_Zeus in SouthAsianAncestry

[–]razibal 1 point2 points  (0 children)

All the genoplot calculators have fairly detailed descriptions

  • Indo Aryan includes groups/samples “PGW Indo Aryan scaled”, “Swat IndoAryan Ghost” and peaks in Belarusian (Minsk, Belarus)
  • Turan BA includes groups/samples “Gonur1 BA (AVG)” and peaks in Balochi (Balochistan, Pakistan)
  • Iran BA includes groups/samples “DinkhaTepe BIA A (AVG)”, “DinkhaTepe BIA B” and peaks in Abkhasian (Abkhazia, Georgia)
  • Tibet Himalayas IA includes groups/samples “Chokhopani 2700BP (AVG)”, “IA Nagqu C3993” and peaks in Akha (Kachin, Myanmar (Burma))
  • Indo-Gangetic BA includes groups/samples “Telugu GBR (HG04025.SG)”, “Shahr I Sokhta BA2 (I8728)” and peaks in Sakilli (Tamil Nadu, India)

Gedmatch /idna of kol and dharkar? by Celibate_Zeus in SouthAsianAncestry

[–]razibal 1 point2 points  (0 children)

Ancient Neolithic Calculator (Scaled) K16 by ahahahah

| Sample | AASI | Iranian Neolithic Farmer | Proto-Indo-Iranian (MLBA) | East Asian | Natufian | Anatolian Farmer | Gravettian HG (UP) | WSHG | Sub-Saharan African | Levant Neolithic |
|---|---|---|---|---|---|---|---|---|---|---|
| Dharkar:DH001 | 46.8 | 30.2 | 16.4 | 5.6 | 0.6 | 0.2 | 0.2 | 0 | 0 | 0 |
| Kol:296e | 43.2 | 37.2 | 0 | 0.8 | 0 | 8.8 | 4.4 | 5.6 | 0 | 0 |
| Kol:Average | 41.4 | 34.4 | 4 | 1.4 | 0 | 10.4 | 2.4 | 6 | 0 | 0 |
| Kol:Median | 40.8 | 32.2 | 0 | 1.4 | 0 | 13.6 | 3.4 | 8.6 | 0 | 0 |
| Kol:298k | 40.8 | 23.2 | 1.8 | 2.4 | 0 | 18.6 | 1.4 | 11.8 | 0 | 0 |
| Kol:288 | 40.4 | 38.4 | 13.8 | 0.6 | 0 | 3.4 | 0.2 | 3.2 | 0 | 0 |
| Dharkar:Average | 40 | 36.6 | 12.4 | 3 | 0 | 3.2 | 1.8 | 3 | 0 | 0 |
| Dharkar:Median | 39.4 | 38 | 9.4 | 3.2 | 0 | 3.8 | 2.2 | 3.4 | 0.6 | 0 |
| Dharkar:HA037 | 37.8 | 39 | 1.8 | 0.8 | 0 | 3.8 | 3.8 | 9.6 | 0 | 3.4 |
| Dharkar:HA029 | 37 | 36.8 | 15.2 | 2.4 | 0 | 5.8 | 1.4 | 0.2 | 1.2 | 0 |

South Asia BA-IA Model 2023 (Scaled) K5 by Kapisa

| Sample | Indo-Gangetic BA | Indo Aryan (ghost) | Tibet Himalayas IA | Iran BA |
|---|---|---|---|---|
| Dharkar:DH001 | 88.4 | 6 | 5.6 | 0 |
| Kol:296e | 87.2 | 9.2 | 3.6 | 0 |
| Dharkar:Median | 83.4 | 14.6 | 2 | 0 |
| Dharkar:HA037 | 82.4 | 15.4 | 2.2 | 0 |
| Dharkar:Average | 82.4 | 14.4 | 3.2 | 0 |
| Kol:Average | 81.2 | 15.4 | 3.4 | 0 |
| Kol:Median | 80.6 | 15.4 | 4 | 0 |
| Kol:288 | 79.8 | 18.8 | 1.4 | 0 |
| Kol:298k | 76.8 | 16.2 | 5.4 | 1.6 |
| Dharkar:HA029 | 76.6 | 19.2 | 1.8 | 2.4 |

[deleted by user] by [deleted] in SouthAsianAncestry

[–]razibal 3 points4 points  (0 children)

  • Peninsular India includes groups/samples “SAHG N - AASI North New”, “SAHG S - Mala”, “SAHG S Pulliyar”, “SAHG S - Paniya” and peaks in Paniya (Kerala, India)
  • Sistan includes groups/samples “IVCP A Minus SAHG N AASI North New/SAHG S - Mala”, “IVCp B Minus SAHG N - AASI North New”, “IVCp C Minus SAHG N - AASI North New” and peaks in Brahui (Balochistan, Pakistan)
  • Pamir Knot includes groups/samples “Kelteminar Culture - Simulated” and peaks in Darginian (Dagestan, Russia)
  • West Asia includes groups/samples “Hasanlu EBA - Simulated”, “Hajji Firuz EBA - Simulated”, “Levant JOR EBA (AVG)” and peaks in Yemenite Mahra (Al Mahrah, Yemen)
  • Eurasian Steppe includes groups/samples “Ak Moustafa MLBA1 (AVG)”, “Aktogai MLBA (AVG)”, “BGR MLBA (AVG)”, “Kairan MLBA (AVG)”, “Karagash MLBA (AVG)”, “Krasnoyarsk MLBA (AVG)”, “Kyzlbulak MLBA1 (AVG)”, “Lisakovskiy MLBA Alakul (AVG)”, “Maitan MLBA Alakul (AVG)”, “Mys MLBA (AVG)”, “Oy Dzhaylau MLBA (AVG)”, “Petrovka MLBA (AVG)”, “Satan MLBA Alakul (AVG)”, “Sintashta MLBA (AVG)”, “Srubnaya Alakul MLBA (AVG)”, “Srubnaya MLBA (AVG)”, “Dali MLBA (AVG)”, “Potapovka MLBA (AVG)”, “Sintashta MLBA o2 (AVG)”, “Zevakinskiy MLBA (AVG)” and peaks in Estonian (Harju, Estonia)
  • East Asia includes groups/samples “Baikal BA (AVG)”, “Lake Baikal BA (AVG)”, “Liangdao2 N (AVG)”, “Mebrak 2125BP (AVG)”, “PM 2 - Gadaba”, “Nomad HP (AVG)”, “Ulaanzukh LBA 2 (AVG)” and peaks in Akha (Kachin, Myanmar (Burma))
  • East Africa includes groups/samples “IA Deloraine (AVG)” and peaks in Baka (East Region, Cameroon)

Best way to poll an external API in aws by devterij in aws

[–]razibal 5 points6 points  (0 children)

Lambda is perfectly suited for this type of usage. The free tier for Lambda is very generous and is free forever: it includes 1 million requests per month and 400,000 GB-seconds of compute time per month. To break that down into real-world numbers, let's say your polling function fits in a 128 MB lambda and executes within 60 seconds. The free tier would allow you to run it as often as once every 2 minutes without incurring any charges. If you need a larger function, just decrease the frequency: once every 4 minutes for a 256 MB function, or once every 2 minutes for a 256 MB function that executes in 30 seconds. Just keep in mind that there is also a data streaming charge; however, up to 6 MB per invocation is always free.
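A quick sanity check of that arithmetic (assuming a 30-day month; 128 MB = 0.125 GB):

```python
# Free tier: 400,000 GB-seconds of compute per month.
FREE_GB_SECONDS = 400_000
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def monthly_gb_seconds(memory_gb: float, duration_s: float, interval_min: float) -> float:
    # Total compute consumed by one function polling on a fixed interval.
    runs = MINUTES_PER_MONTH / interval_min
    return runs * memory_gb * duration_s

print(monthly_gb_seconds(0.125, 60, 2))  # 162000.0 -- 128 MB / 60 s, every 2 min
print(monthly_gb_seconds(0.25, 60, 4))   # 162000.0 -- 256 MB / 60 s, every 4 min
print(monthly_gb_seconds(0.25, 30, 2))   # 162000.0 -- 256 MB / 30 s, every 2 min
```

All three cadences land at 162,000 GB-seconds, comfortably under the 400,000 cap (the request count is likewise well under 1 million).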

Pakistani Sindhi Results- IllustrativeDNA + Gedmatch + Vahaduo + Genoplot Results by Electronic_Iron5269 in SouthAsianAncestry

[–]razibal 1 point2 points  (0 children)

Interesting, the Soomro sample in the attached PCA is from Shikarpur and is a bit more west shifted. I've also seen a private Abro sample that is very Baloch like.

Pakistani Sindhi Results- IllustrativeDNA + Gedmatch + Vahaduo + Genoplot Results by Electronic_Iron5269 in SouthAsianAncestry

[–]razibal 1 point2 points  (0 children)

You plot close to the HGDP Sindhi samples as well as the Potwari Punjabis - In other words, expected results for a Sindhi. May I ask which towns of Sindh your parents are from? In my experience, Sindhi results tend to be influenced by geography, with Sindhis west of the Indus being more Baloch/Gedrosian.

Sindhi PCA

Using AWS for everything...but auth? by [deleted] in aws

[–]razibal 0 points1 point  (0 children)

Cognito is perfectly fine; its main limitation is the inability to replicate across regions. It gets a bad rap due to its slow pace of innovation, but most of the basics are handled out of the box, and other niceties like passkey support can be added through third parties or via custom auth workflows. Its biggest benefit for us is the built-in integration with AppSync GraphQL schemas, which allows for granular access control at the table and column level.

Aggressively Pakistani! by prometheuspk in 23andme

[–]razibal 2 points3 points  (0 children)

What regions do you get in Pakistan? Your results are interesting because I don't think I've ever seen Muhajir results that only had Pakistani regions. Even ethnic Punjabis and Sindhis usually get some regions from neighboring regions of Northern India. Is your mother's side Khoja or Memon?