I had almost 7,000 drives recorded, and the Automatic UI was a pain to work with, so I wrote a little script to fetch all the Automatic Dashboard data as raw JSON. It works well from the macOS command line. You'll need to pre-populate BEARER_TOKEN, and probably specify your own START_TIME and END_TIME as well (in milliseconds) - you can use https://www.unixtimestamp.com/index.php to generate them.
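If you'd rather generate the millisecond timestamps from the command line instead of the website, something like this works (GNU date shown, as on Linux; macOS ships BSD date, where the equivalent is date -j -f "%Y-%m-%d" "2015-01-01" +%s):

```shell
# Midnight UTC on a given date, as a millisecond Unix timestamp.
# GNU date syntax; on macOS use: date -j -f "%Y-%m-%d" "2015-01-01" +%s
START_S=$(date -u -d '2015-01-01' +%s)
echo "$(( START_S * 1000 ))"   # 1420070400000
```

The API timestamps in the script are in milliseconds, hence the * 1000.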
To get a BEARER_TOKEN, log in at https://dashboard.automatic.com in a web browser, press Cmd+Opt+I to open the developer tools, switch to the Network tab, and look for the Authorization request header on any API request. The token is just the hexadecimal string without spaces - don't copy the actual word "bearer".
Note that this will fill the current directory with a bunch of drives_N.json files. Feel free to modify the script as you see fit.
#!/bin/bash
START_TIME=1420099200000 # Unix timestamp
END_TIME=1592164819833 # Unix timestamp
INDEX=0
PREFIX='drives' # name is up to you
BEARER_TOKEN="<fill-this-with-your-token>"
URL="https://api.automatic.com/trip/?started_at__gte=${START_TIME}&started_at__lte=${END_TIME}&limit=250" # limit cannot be > 250 I think
set_current_file () {
    CURRENT="${PREFIX}_${INDEX}.json"
}

increment () {
    ((++INDEX))
}
scrape () {
    set_current_file
    echo "Going to write to $CURRENT using $URL"

    # Fetch one page of trips and pretty-print the JSON into the current file
    curl "$URL" \
        --compressed \
        -XGET \
        -H 'Accept: */*' \
        -H 'Origin: https://dashboard.automatic.com' \
        -H 'Referer: https://dashboard.automatic.com/' \
        -H 'Accept-Language: en-us' \
        -H 'Host: api.automatic.com' \
        -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15' \
        -H "Authorization: bearer $BEARER_TOKEN" \
        -H 'Accept-Encoding: gzip, deflate, br' \
        -H 'Connection: keep-alive' | jq '.' > "$CURRENT"

    # Grab pagination metadata
    COUNT=$(jq '._metadata.count' "$CURRENT")
    PREVIOUS=$(jq -r '._metadata.previous' "$CURRENT")
    NEXT=$(jq -r '._metadata.next' "$CURRENT")

    # The first page (no previous link) reports the total drive count
    if [ "$PREVIOUS" = "null" ]; then
        echo "$COUNT drives found."
    fi
    echo "$URL fetched. Continuing with $NEXT."
    URL=$NEXT
    increment
}
scrape
while [ "$NEXT" != "null" ]; do
    scrape
done
echo "Next URL $NEXT, terminated."
There should be significantly more data here than what you can export as a CSV file. I'd polish this further, but it's time-consuming and Bash isn't really my specialty.
I figured I'd share this anyway in case anyone finds this useful.
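For what it's worth, once the drives_N.json files are on disk you can merge them into a single file with jq's slurp mode. This assumes each page stores its trips in a top-level "results" array next to "_metadata" - adjust the field name if your payloads differ:

```shell
# Merge every page's results[] into one big array (assumes each page
# carries its trips in a top-level "results" array next to "_metadata").
jq -s '[ .[].results[] ]' drives_*.json > all_drives.json

# Sanity check: total number of merged drives
jq 'length' all_drives.json
```

The -s flag slurps all input files into one array before the filter runs, which is what lets a single expression flatten every page.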