10 steps to download/archive NotePD lists
This is a technical list for fairly technical people. The code examples were run on a Mac.
Your public lists can be downloaded without the key. Your private lists will need a key.
I have been a professional software developer for 20 years, but ChatGPT wrote these scripts; I just knew the right questions to ask.
1. Right-click on the page and choose Inspect to open the developer console (you can also press F12)
2. Navigate to your profile
3. Scroll down the page while watching the Network tab until you see a request with page=X in its name
4. Click on that request to see the Request URL:
5. Paste your public URL into a browser. You will see the data:
You can see this one as well:
https://api.notepd.com/v1/posts/details/search/chris407x/0/?page=2
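From the two scripts in this guide, the URL pattern appears to be .../search/<username>/<0 or 1>/?page=<N>, where 0 fetches public lists and 1 fetches private ones. A small sketch of building the URL for your own username (USERNAME and the other variables here are placeholders to swap out):

```shell
# Assumed URL pattern: .../search/<username>/<visibility>/?page=<N>
USERNAME="chris407x"   # replace with your own username
VISIBILITY=0           # 0 = public lists; the private script later uses 1
PAGE=2
URL="https://api.notepd.com/v1/posts/details/search/${USERNAME}/${VISIBILITY}/?page=${PAGE}"
echo "$URL"
# → https://api.notepd.com/v1/posts/details/search/chris407x/0/?page=2
```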
6. For private ideas you will need the auth token.
Get this by logging in; the token then appears in the Authorization header of the site's API requests in the Network tab.
7. Use this script to download all your public lists
This is a shell script. On a Mac, open your Terminal application and type:
vi fetch_notepd.sh (or however you want to name your file)
This will open a new file. If you have not used Vim before, you will need to press "i" to get into insert mode. After you have pasted in your code:
- Press Escape (this leaves insert mode)
- Type ":wq" and press Enter. You will see it at the bottom of your terminal window. This means "write and quit".
Before saving, edit these values to match your system. BASE_URL is the NotePD link that you found earlier:
BASE_URL="https://api.notepd.com/v1/posts/details/search/chris407x/0/?page="
DIRECTORY="/Users/MyUser/Documents/NotePd_2024-07-20"
#!/bin/bash

# Variables
# public
BASE_URL="https://api.notepd.com/v1/posts/details/search/chris407x/0/?page="
DIRECTORY="/Users/MyUser/Documents/NotePd_2024-07-20"
FILENAME_PREFIX="notepd_public_"
PAGE=1

# Ensure the directory exists
mkdir -p "$DIRECTORY"

# Function to fetch and save data
fetch_and_save_data() {
    local url="$1"
    local filepath="$2"

    # Echo the curl command
    echo "Executing: curl -s -o \"$filepath\" -w \"%{http_code}\" \"$url\""
    response=$(curl -s -o "$filepath" -w "%{http_code}" "$url")

    if [ "$response" -eq 404 ]; then
        echo "Page $PAGE returned a 404. Stopping."
        rm "$filepath"  # Remove the empty file created
        exit 0
    fi
    echo "Saved $filepath"
}

# Loop to fetch pages until a 404 response is received
while true; do
    URL="${BASE_URL}${PAGE}"
    FILE_PATH="${DIRECTORY}/${FILENAME_PREFIX}${PAGE}.json"
    fetch_and_save_data "$URL" "$FILE_PATH"
    PAGE=$((PAGE + 1))
done
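If you want to see the script's stop-at-404 logic in isolation, here is a miniature version with a stubbed fetch. fake_fetch is hypothetical, standing in for curl, so this runs without network access (the real script exits inside its fetch function rather than breaking out of the loop):

```shell
# Hypothetical stand-in for curl: "finds" 3 pages, then returns 404
fake_fetch() {
    if [ "$1" -le 3 ]; then echo 200; else echo 404; fi
}

PAGE=1
while true; do
    response=$(fake_fetch "$PAGE")
    if [ "$response" -eq 404 ]; then
        echo "Page $PAGE returned a 404. Stopping."
        break
    fi
    echo "Fetched page $PAGE"
    PAGE=$((PAGE + 1))
done
# prints "Fetched page 1" through "Fetched page 3",
# then "Page 4 returned a 404. Stopping."
```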
8. Run the file by typing: sh fetch_notepd.sh
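Equivalently, you can mark the script executable once and run it directly; this works for any script with a shebang line, demonstrated here on a stand-in file so you can try it safely:

```shell
# Create a trivial stand-in script with a shebang
cat > hello.sh <<'EOF'
#!/bin/bash
echo "hello"
EOF

chmod +x hello.sh   # mark it executable (only needed once)
./hello.sh          # runs via its #!/bin/bash shebang
# → hello
```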
9. This will download all of your NotePD JSON files into the directory that you set above.
To get to that directory, type in your terminal:
cd /Users/MyUser/Documents/NotePd_2024-07-20
then:
ls
This will list your files.
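To spot-check one of the downloaded pages, jq (installable with brew install jq) can pull out just the list titles. A sketch using a stand-in file — sample.json and its contents are fabricated here to mimic the results[].title structure that the conversion script in step 10 reads; point jq at one of your real notepd_public_N.json files instead:

```shell
# Stand-in file mimicking the fields the conversion script uses;
# replace sample.json with a real notepd_public_1.json.
cat > sample.json <<'EOF'
{"results":[{"title":"My first list"},{"title":"Another list"}]}
EOF

# Print the title of every list on the page
jq -r '.results[].title' sample.json
# prints:
# My first list
# Another list
```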
10. Right now these are JSON files with a lot of metadata. This script will turn them into Markdown, which is easier to read in any text editor and can be imported into Obsidian.
Edit these values and follow the directions above for naming and running the bash script (note that this script uses jq; on a Mac you can install it with brew install jq):
# Define the source and destination directories
SOURCE_DIR="/Users/MyUser/Documents/NotePd_2024-07-20"
DEST_DIR="/Users/MyUser/Documents/Obsidian Vault/NotePd"
Make and run this script as above. I suggest copyJsonToMarkdown.sh:
#!/bin/bash

# Define the source and destination directories
SOURCE_DIR="/Users/MyUser/Documents/NotePd_2024-07-20"
DEST_DIR="/Users/MyUser/Documents/Obsidian Vault/NotePd"

# Create the destination directory if it doesn't exist
mkdir -p "$DEST_DIR"

# Function to sanitize filenames by replacing or removing invalid characters
sanitize_filename() {
    echo "$1" | sed 's/[\/:*?"<>| ]/_/g'
}

# Iterate over each JSON file in the source directory
for json_file in "$SOURCE_DIR"/*.json; do
    # Parse the JSON file and extract the results array
    results=$(jq -c '.results[]' "$json_file" 2>/dev/null)

    # Check if the jq command was successful
    if [ $? -ne 0 ]; then
        echo "Warning: Failed to parse $json_file. Skipping."
        continue
    fi

    # Iterate over each result in the results array
    echo "$results" | while IFS= read -r result; do
        # Extract the necessary fields from each result
        title=$(echo "$result" | jq -r '.title')
        updated=$(echo "$result" | jq -r '.updated' | cut -d'T' -f1)
        main_image=$(echo "$result" | jq -r '.image // empty')
        description=$(echo "$result" | jq -r '.description' | sed 's/<[^>]*>//g')

        # Generate the filename
        summary=$(sanitize_filename "$(echo "$title" | cut -c1-50)")
        filename="${updated}_${summary}"

        # Add "private" to the filename if the original file name contains the word "private"
        if [[ "$json_file" == *"private"* ]]; then
            filename="${filename}_private"
        fi
        filename="${filename}.md"

        # Ensure the directory exists before writing the file
        dir_path=$(dirname "$DEST_DIR/$filename")
        mkdir -p "$dir_path"

        # Start writing the markdown file
        {
            echo "# ${title}"
            echo "updated Date: ${updated}"
            if [ -n "$main_image" ]; then
                echo "![main_image](${main_image})"
            fi
            echo "${description}"
            echo ""
            echo "## Post Ideas"
        } > "$DEST_DIR/$filename"

        # Iterate over each post_idea in the post_ideas array
        post_ideas=$(echo "$result" | jq -c '.post_ideas[]')
        echo "$post_ideas" | while IFS= read -r idea; do
            idea_text=$(echo "$idea" | jq -r '.idea')
            idea_image=$(echo "$idea" | jq -r '.image // empty')
            explanation=$(echo "$idea" | jq -r '.explanation' | sed 's/<[^>]*>//g')

            {
                echo "1.## ${idea_text}"
                if [ -n "$idea_image" ]; then
                    echo "![idea_image](${idea_image})"
                fi
                echo "${explanation}"
                echo ""
            } >> "$DEST_DIR/$filename"
            # echo "$DEST_DIR/$filename"
        done
    done
done

echo "Markdown files have been created in $DEST_DIR."
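You can see what sanitize_filename does by running it on its own. The title from the example in step 11 becomes exactly the underscore-separated filename shown there:

```shell
# Same sed substitution as in the script above: replaces /, :, *, ?,
# ", <, >, | and spaces with underscores so the title is filename-safe.
sanitize_filename() {
    echo "$1" | sed 's/[\/:*?"<>| ]/_/g'
}

sanitize_filename 'Song titles for space/psychedelic rock album'
# → Song_titles_for_space_psychedelic_rock_album
```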
11. The Markdown files will look like this:
2022-06-26_Song_titles_for_space_psychedelic_rock_album
# Song titles for space/psychedelic rock album
updated Date: 2022-06-26
Some of these are very overused....
## Post Ideas
1.## Mobius Trip
1.## Beyond presence
1.## Star flight
1.## Dark matter vortex
1.## Hyperdrive
1.## Infinite folded space
1.## Parallel Universe hyperdrive
1.## Intersmoke
1.## Solar wing
1.## Inverted gravity
12. Bonus: script for downloading private ideas
Follow the instructions above to retrieve your Authorization token. You will make another download bash script; I saved these files in the same directory.
Call the file fetch_notepd_private.sh or similar, and replace these values:
# Variables
BASE_URL="https://api.notepd.com/v1/posts/details/search/chris407x/1/?page="
DIRECTORY="/Users/MyUser/Documents/NotePd_2024-07-20"
FILENAME_PREFIX="notepd_private_"
PAGE=1
AUTH_TOKEN="YOUR_TOKEN_HERE"
In this script:
#!/bin/bash

# Variables
BASE_URL="https://api.notepd.com/v1/posts/details/search/chris407x/1/?page="
DIRECTORY="/Users/MyUser/Documents/NotePd_2024-07-20"
FILENAME_PREFIX="notepd_private_"
PAGE=1
AUTH_TOKEN="eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNzI0MDY5MDAwLCJpYXQiOjE3MjE0NzcwMDAsImp0aSI6Ijg1MTk5Nzk2OTRlYjQzMmI5NDRjMjk3ODcxYzU4M2Y2IiwidXNlcl9pZCI6MTQ0N30.R6ch4947r0VFOFgyVnBjuiJ5BaoDdBSS_QhgCtJzkns"

# Ensure the directory exists
mkdir -p "$DIRECTORY"

# Function to fetch and save data
fetch_and_save_data() {
    local url="$1"
    local filepath="$2"

    # Echo the curl command
    echo "Executing: curl -s -o \"$filepath\" -w \"%{http_code}\" -H \"Authorization: Token $AUTH_TOKEN\" \"$url\""
    response=$(curl -s -o "$filepath" -w "%{http_code}" -H "Authorization: Token $AUTH_TOKEN" "$url")

    # Echo the response
    echo "Response: $response"

    if [ "$response" -eq 404 ]; then
        echo "Page $PAGE returned a 404. Stopping."
        rm "$filepath"  # Remove the empty file created
        exit 0
    fi
    echo "Saved $filepath"
}

# Loop to fetch pages until a 404 response is received
while true; do
    URL="${BASE_URL}${PAGE}"
    FILE_PATH="${DIRECTORY}/${FILENAME_PREFIX}${PAGE}.json"
    fetch_and_save_data "$URL" "$FILE_PATH"
    PAGE=$((PAGE + 1))
done