Using local AI models for the first time to save like 20 seconds of work
Posted 2026-04-08
Whenever I get a document—like mail and stuff—I scan it and upload it to Dropbox. When I do, I name it in the format month and day followed by a short description. For a February 13th dentist appointment invoice, I might have:
02_13-dentist_invoice.pdf
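In Python terms (a sketch of the convention, not the actual script), the naming scheme looks like:

```python
from datetime import date

def name_for(d: date, description: str) -> str:
    """Build a filename like 02_13-dentist_invoice.pdf: month_day-description."""
    return f"{d.strftime('%m_%d')}-{description}.pdf"

print(name_for(date(2026, 2, 13), "dentist_invoice"))
# → 02_13-dentist_invoice.pdf
```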
Naming those files, though, isn’t something I particularly enjoy doing. It’s not the coming up with the name so much as the typing it out. Those underscores and dashes can get tricky. When I had a desktop scanner and was doing it on my Mac it was one thing, but switching between the symbols and characters keyboards on iOS to get it just right is a challenge. There’s friction, and friction invites excuses to neglect.
In the back of my head, there’s been a project idea: have AI come up with the name. That feels like one of those things AI should be capable of. And it would be a good chance to try out a local model. The idea of scanning all my important tax and medical documents and shipping them off to Anthropic made me uneasy.
One of the reasons I hadn’t moved this project forward is that I felt like running local models Was Hard—though I’m not really sure why. Then Google released the Gemma 4 models for running locally, and I figured, “if Google was releasing it, it couldn’t be that hard to get set up.”
Over the course of a few hours on Sunday morning I whipped something up.
Getting the model running wasn’t particularly difficult. I basically followed the instructions here. Some of the commands weren’t exactly right, or at least didn’t work out of the gate, but Claude Code was able to correct them. Using Ollama, I could run the models in API mode, making them compatible with the OpenAI SDKs.
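To make the “API mode” point concrete, here’s a sketch of the request shape. The base URL is Ollama’s default OpenAI-compatible endpoint; the model name and prompt are placeholders, not what I actually used:

```python
# Assumed default for a local Ollama server's OpenAI-compatible API.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str, image_b64: str) -> dict:
    """Build an OpenAI-style chat-completions payload with an inline image."""
    return {
        "model": model,  # placeholder: whatever model tag you pulled locally
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }

# With the OpenAI SDK, you'd point the client at the local server instead
# of api.openai.com, e.g.:
#   client = OpenAI(base_url=BASE_URL, api_key="unused")
#   client.chat.completions.create(**build_chat_request(...))
```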
From there, I wrote a script that checks a designated “input” folder in my Dropbox for new files. When it finds one, it converts the PDF to an image (the local LLM won’t accept PDFs directly) and passes it to the LLM. The LLM returns the description part after the dash. I format that, concatenate it with the current date, and move the file to the destination folder.
Overall, the implementation was a success. There are a few bugs still. Primarily, it sometimes gives me a completely whack name like 04_05-A_short_descriptive_title_for_the_document,_suitable_for_a_file_name..pdf. I’ve also still not given it full access to my Dropbox documents. The “destination” folder is currently just a sibling to the input folder, and I still have to move files into the right place. Once I get more of the bugs worked out, I’m going to give the project access to that too. The setup should support tool access, so I’d really love to give the LLM the ability to file each document in the right folder as well.
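One way to tame the whack names would be to sanitize whatever the model returns before it becomes a filename. A sketch, not what’s currently running; it only guarantees the result is filename-safe and short, while getting a genuinely good description is a prompting problem:

```python
import re

def sanitize(raw: str, max_words: int = 5) -> str:
    """Clamp an LLM-suggested description to a short, filename-safe slug."""
    # Keep only alphanumeric runs; this drops commas, stray dots, etc.
    words = re.findall(r"[A-Za-z0-9]+", raw.lower())
    return "_".join(words[:max_words]) or "document"

print(sanitize("A_short,_descriptive_title for the document.."))
# → a_short_descriptive_title_for
```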