"A Hail Mary pass is a very long forward pass in American football, typically made in desperation, with a very small chance of achieving a completion." - Wikipedia
One day I was nearly out of storage space on my cloud server, so I searched for large files and folders to delete. I found an old Nextcloud instance and, without a second thought, deleted the whole thing, since it had been running on my home server for years. Three seconds later I realized that everything except some of my old notes was on my home server, and that the deleted Nextcloud directory hadn't been backed up for the past 18 months.
The server uses the EXT4 file system. I quickly realized that none of the typical file recovery tools would work, because the files were completely unformatted plain text with no headers to search for. I didn't come up with this method myself; I'm merely re-posting something I pieced together from other sources, since a working solution was very difficult to find. I'm also assuming the reader knows how to use Linux.
Remount the file system read-only. This prevents further data loss and stops you from dumping data onto the very file system that holds the data you're trying to recover. That said, you'll need enough storage space on another hard disk. Going by my own experience, you'll probably need roughly a third of your file system's capacity: the cloud server has 40 GB of storage and my data dump was just shy of 16 GB. How much you need depends entirely on what sort of data you're storing.
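A minimal sketch of the remount, assuming the deleted files lived under / (replace the mount point with the one on your system, and run as root):

```shell
# Remount read-only so nothing new gets written over the deleted blocks.
# '/' is a placeholder; use the mount point of the affected file system.
mount -o remount,ro /

# If the remount fails because files are open for writing, list the
# processes holding the mount (fuser is part of the psmisc package):
fuser -vm /
```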
If the server with the deleted files is in the cloud, you'll probably need to mount a shared folder from your home server as the target where the file system dump will be stored. If you can work locally, great; it will be a lot faster.
My home connection is behind NAT, so I couldn't simply mount a shared NAS folder from my cloud server. Instead, I had to use SSH tunneling: I connect from my home NAS to my cloud server and let the cloud server use that connection to reach my NAS sitting behind NAT.
Open a connection from the file server ("local server") to the server where the deleted files are located ("remote server"). In the example below, I assume the SSH port of the remote server is 22 and that port 43022 is free on the remote server. That port will serve as the tunnel back to your home server. On the local server, run:
ssh -R 43022:localhost:22 user@domain.com
When you run the command above and log in, you'll be on the remote server. There, you'll use the SSH tunnel to connect back to the local server. Run the command below on the remote server to mount a folder from the local server onto the remote server. Change the folder to be mounted and the mount point to reflect your own paths.
sshfs -o allow_other -p 43022 localhost:/path/to/shared/folder/ a_folder_on_the_remote_server
If you get FUSE errors, edit /etc/fuse.conf as root and uncomment the user_allow_other option.
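The line to uncomment is user_allow_other, which must be enabled before non-root users can pass -o allow_other to sshfs. Editing by hand works, or you can flip it with sed (the file path is the standard location on Debian-like systems):

```shell
# Uncomment the user_allow_other line in /etc/fuse.conf (run as root)
# so non-root FUSE mounts may use the allow_other option.
sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
```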
Next, run the command below to dump the contents of the file system into a file. Make sure the target file is on another hard disk, because otherwise you'll likely overwrite your data. Again, change the paths to reflect your own system. Depending on your file system size and connection speed, it will take anywhere from half an hour to days to complete.
Run as root: strings /dev/your_file_system > a_folder_on_the_remote_server/dump
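Since the dump can run for days, it's worth detaching it from your SSH session so a dropped connection doesn't kill it. A sketch using nohup (the device and target paths below are placeholders):

```shell
# Dump all printable strings from the block device, detached from the
# terminal. /dev/sda1 and the target path are placeholder examples.
nohup strings /dev/sda1 > a_folder_on_the_remote_server/dump 2>/dev/null &

# Gauge progress by watching the dump file grow:
ls -lh a_folder_on_the_remote_server/dump
```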
You'll probably end up with a very large file, and GUI-based tools simply won't cope. I reckon "less", "cat", "head" and "tail" are the best options, but you'll need a lot of time and you'll probably have to stitch the files back together manually. Be aware that if you saved the file you're searching for several times, you'll also have several copies of it in the dump and will have to hunt for the latest version manually. For example, one of the files I had deleted was a list of coffee blends I had tested at home; the dump contained 16 copies of it. Below is an excerpt from the dump when I searched for 'Mokaflor'. The number at the beginning of each line is the line number in the dump (and 36+ million was not even close to the last line).
62874031 Mokaflor farm tanzania: 8
62874035 Mokaflor L'espresso di fattoria: 7,5
62874043 Mokaflor farm vihre
62874058 Mokaflor farm tanzania: 8
62874062 Mokaflor L'espresso di fattoria: 7,5
62874070 Mokaflor farm vihre
177034533 Mokaflor farm tanzania: 8
177034537 Mokaflor L'espresso di fattoria: 7,5
177034545 Mokaflor farm vihre
177034559 Mokaflor farm tanzania: 8
177034563 Mokaflor L'espresso di fattoria: 7,5
177034571 Mokaflor farm vihre
228790065 Mokaflor farm tanzania: 8
228790069 Mokaflor L'espresso di fattoria: 7,5
228790077 Mokaflor farm vihre
228936027 Mokaflor farm tanzania: 8
228936031 Mokaflor L'espresso di fattoria: 7,5
228936039 Mokaflor farm vihre
228936046 Mokaflor colombia:
229047128 Mokaflor farm tanzania: 8
229047132 Mokaflor L'espresso di fattoria: 7,5
229047140 Mokaflor farm vihre
229047714 Mokaflor farm tanzania: 8
229047718 Mokaflor L'espresso di fattoria: 7,5
229047726 Mokaflor farm vihre
229095398 Mokaflor farm tanzania: 8
229095402 Mokaflor L'espresso di fattoria: 7,5
229095410 Mokaflor farm vihre
229095417 Mokaflor farm colombia: 7,5
229095418 Mokaflor rosso: 7,5
229138761 Mokaflor farm tanzania: 8
229138765 Mokaflor L'espresso di fattoria: 7,5
229138773 Mokaflor farm vihre
229138780 Mokaflor farm colombia: 7,5
229138781 Mokaflor rosso: 7,5
229138783 Mokaflor farm brazil colombia:
229147575 Mokaflor farm tanzania: 8
229147579 Mokaflor L'espresso di fattoria: 7,5
229147587 Mokaflor farm vihre
229147594 Mokaflor farm colombia: 7,5
233975314 Mokaflor farm tanzania: 8
233975318 Mokaflor L'espresso di fattoria: 7,5
233975326 Mokaflor farm vihre
233975341 Mokaflor farm tanzania: 8
233975345 Mokaflor L'espresso di fattoria: 7,5
233975353 Mokaflor farm vihre
297372721 Mokaflor farm tanzania: 8
297372725 Mokaflor L'espresso di fattoria: 7,5
343015046 Mokaflor farm tanzania: 8
343015050 Mokaflor L'espresso di fattoria: 7,5
359542075 Mokaflor farm tanzania: 8
359542079 Mokaflor L'espresso di fattoria: 7,5
359542087 Mokaflor farm vihre
359542094 Mokaflor farm colombia: 7,5
359542095 Mokaflor rosso: 7,5
Now you can use less or cat to search the file. In less, when you've found what you're looking for, use the "mark" feature: hit 'm' followed by any letter, for example 'a', to mark the line where the data begins. Scroll to the end of the data and press '|a'; the colon on the last row changes to an exclamation mark. Now you can dump the marked lines to a file with 'cat >> lessdata' (make sure to use >> instead of > so you don't overwrite data you've already saved!). Once you hit enter, the marked lines are saved to the file 'lessdata'. You will likely have to edit the lessdata file manually with the editor of your choice; assuming a reasonable file size, GUI tools will work just fine. If you prefer, you can also use 'cat -n dump | grep keyword' and then use 'tail' or 'head' to save the n lines following a hit. I suppose this approach is best if you have at least as many hits as I did: search for the hits, write a small script to tail/head n lines from each one, then find the most recent instance and edit the file manually.
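The script idea above can be sketched like this; the keyword, the 50-line context window, and the file names are all assumptions to adapt to your own dump:

```shell
# List every hit with its line number in the dump.
grep -n 'Mokaflor farm tanzania' dump

# Pull one hit plus 50 lines of context into its own file.
# sed -n 'M,Np' prints only lines M through N, which is cheaper
# than head/tail pipelines on a multi-gigabyte dump.
sed -n '359542075,359542125p' dump > candidate.txt

# Or automate it: one numbered context file per hit.
i=0
grep -n 'Mokaflor farm tanzania' dump | cut -d: -f1 | while read -r line; do
    i=$((i + 1))
    sed -n "${line},$((line + 50))p" dump > "candidate_${i}.txt"
done
```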