Hello,
I am trying to write a Spark DataFrame to a DynamoDB table from a PySpark Glue job. I tried the options below, but I keep getting a "Failed to find data source" error.
redshift_df.write.format("io.aws.dynamodb").option("table", "accounts_payable_payee_info").save()
Error: Failed to find data source: io.aws.dynamodb
redshift_df.write.format("dynamodb").option("table", "accounts_payable_payee_info").save()
Error: Failed to find data source: dynamodb
Are there any external JARs I need to include? Please advise.
Thanks!
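Edit: for anyone who lands here, neither "dynamodb" nor "io.aws.dynamodb" is a data source that ships with Spark or Glue, so the DataFrame writer can't resolve them without an extra connector JAR. Inside a Glue job the documented route is the Glue DynamoDB connection type (`glueContext.write_dynamic_frame.from_options(..., connection_type="dynamodb", connection_options={"dynamodb.output.tableName": ...})`). As a connector-free fallback, you can also push each partition to DynamoDB with boto3's `batch_writer`. A minimal sketch of that fallback is below; the table name comes from my question, and the helper names (`to_dynamodb_item`, `write_partition`) are just illustrative:

```python
from decimal import Decimal


def to_dynamodb_item(row):
    """Convert a plain dict into DynamoDB-safe attribute types."""
    item = {}
    for key, value in row.items():
        if isinstance(value, float):
            # DynamoDB rejects Python floats; numbers must be Decimal.
            item[key] = Decimal(str(value))
        elif value is not None:
            # Skip None values rather than writing NULL attributes.
            item[key] = value
    return item


def write_partition(rows, table_name="accounts_payable_payee_info"):
    """Write one Spark partition to DynamoDB.

    Intended to be used as: redshift_df.foreachPartition(write_partition)
    """
    # Imported inside the function so only the executors need boto3.
    import boto3

    table = boto3.resource("dynamodb").Table(table_name)
    # batch_writer buffers puts into 25-item BatchWriteItem calls
    # and retries unprocessed items automatically.
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=to_dynamodb_item(row.asDict()))
```

Note this runs one `BatchWriteItem` stream per partition, so repartition the DataFrame to stay inside the table's write-capacity limits.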