A grave assignment

I am a lifelong friend and associate of Jim Eckford, who, with his wife Carlen, operates the Rancho Burro Donkey Sanctuary in Arroyo Grande, California.

The sanctuary has been in operation for over 20 years, and during that time it has been home to as many as 16 burros and a couple of mules. The animals come from various places, usually in jeopardy or grave danger, and at the sanctuary they are given lifetime homes. They are treated extremely well there.

The Eckfords also provide me with a place to have my wood shop, on the second floor of their barn at Rancho Burro. There I make stuff, fix things, and get into all sorts of trouble. There I have my tools, including the Avid CNC machine that I have written about in these posts over the years. I also have the usual table saws, band saws, sanders, planers, and dust collection systems that make my shop a great place to work.

When the sanctuary needs a cabinet or a sign or a repair, I am often called to provide these things. I do it with a smile, because I have become a part of the sanctuary family. And I have been friends with the donkeys as long as they have been housed there.

And, over the years, some of the donkeys have died, usually as a result of disease or incurable injuries suffered before they became members of the Rancho Burro family. Those donkeys have been buried on the property in a small graveyard where they can be remembered by the people who loved and cared for them.

We needed a way to honor them, so I designed a grave marker to identify each of them.

This is the basic grave marker master pattern. The final markers are about 20 inches tall and 9 inches wide.

These grave markers had to meet a set of criteria: they had to be sturdy, easy to read, reasonably easy to make, and able to withstand sun and weather for a long period of time.

Our mutual friend, Steve Triplett, who is a master luthier and maker of beautiful harps, comes into the story at this point. He owns a sawmill. Steve is a volunteer at the donkey sanctuary, and has provided much love to the donkeys. I turned to him with my list of grave marker criteria, and he told me that he had just what I needed: Black Locust lumber.

He milled the lumber in question from a tree that fell near the town of Santa Margarita, about 20 miles from my home. He sliced the log into two-inch boards and “stickered” them in piles behind his shop. There they have been drying for over a year (lumber dries very slowly outdoors).

He and I took a few of those boards and re-sawed them on his giant band saw into one-inch boards. I took those to my shop where I planed them to a slightly thinner size, and sanded them to smooth finished lumber.

Then I designed the grave markers in Adobe Illustrator, sharing the designs with Jim and Carlen, and getting their approval of the plan.

I ordered epoxy resin from a company called Total Boat, along with a small bottle of black colorant for that epoxy. Then I made a prototype grave marker.

The technique is to make the design in Illustrator, then modify it very slightly so that it will cut correctly on the CNC machine. Mostly this involves widening the thin parts of letters very slightly so that the 1/16 inch cutter I use to rout the pattern into the wood can clear them.

I use a lovely typeface called Arno Pro, designed by Robert Slimbach of Adobe Systems. Even using the bold variation of that font, there are usually some strokes that are less than 0.0625 inch across. These would cause the CNC software to skip those lines, creating gaps in the lettering. To overcome this problem, I draw a 1/16 inch red circle in Illustrator, and I drag it around the design, placing it in the thin parts of the lettering. Where the red dot is larger than the stroke of the letter, I adjust the anchor points of the type as little as possible to get the red dot to clear. That ensures that the lettering will be cut correctly on the grave markers.
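The red-dot test boils down to a simple comparison: any stroke narrower than the cutter diameter will be skipped by the toolpath software. Here is a minimal sketch of that rule in Python; the stroke widths are made-up sample measurements, not values from a real design.

    # The red-dot rule: a 1/16 inch end mill cannot cut any feature narrower
    # than its own diameter, so flag strokes that need to be widened.
    CUTTER_DIAMETER = 0.0625  # inches

    # Made-up sample measurements of the thinnest stroke in each letter.
    stroke_widths = {"L": 0.071, "e": 0.058, "v": 0.064, "i": 0.081}

    for letter, width in stroke_widths.items():
        verdict = "clears" if width >= CUTTER_DIAMETER else "too thin, widen slightly"
        print(f"{letter}: {width:.3f} in, {verdict}")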

Here is the lettering for a dog named Levi. I put the red dot in the thinnest areas of the letters to test their width. If the dot is larger than the stroke of the letter, I modify the letter’s anchor points very slightly to ensure that the CNC machine will cut that part of the letter.

I drew a donkey silhouette in Illustrator, working from a photo, and use that to cut the image of the donkey into the grave marker. When the family’s old farm dog died last year I added the outline of a Golden Retriever to my Illustrator library, and this year I added a cat image to make a grave marker for our favorite barn cat, who died of old age.

This is the grave marker for Henry, who died late last year. The bas relief routing is 0.125 inch deep, cut with two end mill cutters: the donkey and the lettering with a 0.0625 inch cutter, and the border with a 0.125 inch cutter.

I put these elements together to make the grave markers, and then I cut the images into the Black Locust lumber. It’s nothing fancy: I rout the images 1/8 inch deep into the wood with a tiny 0.0625 inch cutter. When finished, I sand the surface, and fill the bas relief with black epoxy.

This is the grave marker for a donkey named Henry. I have poured black epoxy resin into the bas relief of the CNC routing, where it will harden. After that, I sand the board in a drum sander and that creates a clean image of the lettering and the donkey silhouette in the wood.

When the epoxy is hard, I run the board through my drum sander, which removes the epoxy above the top surface, leaving a beautiful, crisp image of the donkey (or dog or cat) and the border in the surface of the wood.

Here are four of the grave markers ready for their concrete bases.

To hold the markers so that they can be set into the ground, I leave several inches of the board below the lettering. In that part I cut four holes, and into those holes I insert more Black Locust wood to make cross-braces.

Then I suspend the grave marker with its cross-braces in a mold that I built, and I pour 12 pounds of wet cement into the mold, creating a base for the grave marker that will keep it straight and solid when put in the ground. The cement also protects the wood from getting wet and rotting in the ground.

This grave marker has its cross-braces inserted and is suspended in the mold. I pour a soupy mix of cement into the mold, using a grade of cement called “SPEC” which has no aggregate. The grave marker sits in this mold for two days while the cement hardens. Then I break it out and use the mold again.

In the end, I apply Watco Danish Oil to the Locust wood, which makes the lettering really stand out against the natural color of the wood. When that oil dries, I apply a coat of Spar Varnish to protect the wood outdoors (I am skeptical about this, but time will tell).

After curing in the mold for two days, I break the grave marker out, and clean the mold to reuse it for the next one.

Here are seven of the nearly finished grave markers with their concrete bases. I will apply a coat (maybe two!) of Spar Varnish to the wood, and then they will go into the graveyard to mark the resting places of our departed donkey (and dog and cat) friends.

The grave markers look very nice, and I am confident that they will be lovely when placed in the sanctuary’s graveyard. We plan to place them in the coming weeks (as soon as the rain stops).

With those we can remember our donkey friends and show respect for their time on this earth.


The Smyth book sewing machine is an extraordinary device

For over five years I have been working to restore a 1935 Smyth book sewing machine. You can read several posts on that topic here, mostly related to the mechanics of getting the machine running after a long dormancy, and subsequent electrical and electronic improvements to get the machine turning again.

The machine is part of the collection in the Shakespeare Press Museum at California Polytechnic State University in San Luis Obispo, California.

This is the working part of the Smyth machine. It has blocks to hold up to eight needles, and up to eight crochet hooks. The threads come in from the top of the machine and wend their way to the needles. Stitching is an elaborate process of thread being poked through a hole in the saddle of a signature, passed to a crochet hook, then passed back through the needle hole and out again. The result is a sturdy sewn book block, ready for a cover.

I had declared in one of those posts that I had succeeded in getting the machine to work again. This was not entirely correct. I had the machine making single rows of chain stitches using its needles only. It was not making the cross-threads inside each signature, nor the chain stitches that run parallel to the primary stitches. At the time I was unconcerned that it wasn’t behaving perfectly. At least it was behaving.

My unconcern was rooted in the fact that there was no demand for Smyth-sewn book blocks in our department. No one was hankering for this service. This despite the fact that Cal Poly teaches a class called Book Design Technology where students learn how to sew and bind books, and they study machine sewing and binding in the process.

The instructor of that class, Prof. Donna Templeton, asked me late last year to demonstrate the machine for her students. I did so, but I was frustrated by not being able to demonstrate it working perfectly.

This is a close-up of the machine as I have it configured for 7-inch book blocks. The red dots indicate needles while the blue dots indicate crochet hooks. Each pair of needles and hooks makes two parallel rows of chain stitches.

Dr. Templeton asked me, in January of this year, if I could get the machine working in time for the students in our TAGA chapter to use for their annual technical journal production. I said yes. (TAGA is the Technical Association of the Graphic Arts.)

Then I began the process of troubleshooting the 90-year-old machine to solve the stitching problems.

I have the instruction/parts book that came with the machine. I read it from cover to cover. I attempted to adjust the variable settings on the machine, and I tried numerous times to get the parallel stitches to work. It didn’t respond.

Needles are similar to sewing machine needles, with a trough down the narrow end to accommodate the thread before it passes through the eye.

Then I dug in. I knew the machine inside and out, having disassembled and reassembled most of it after it arrived in the museum. I understood in principle how the sewing process works, and I had visited a bindery in Los Angeles County to see their machine, one identical to ours, running. I took photos and videos, and I had an hour of training on the machine so I could operate it.

Crochet hooks are the same length as the needles, and are adjacent to them. Unlike the needles, though, the crochet hooks rotate after catching the thread, then pull the thread up and out of the signature. Once outside, they rotate back, and carry the loop to the next signature, where they drop it and catch the next thread, pulling the new loop through the previous one, and creating a chain stitch (see the illustration below for more on this).

Back in the museum I sat, frustrated, in front of it, feeding folded signatures into the machine, and having it poke holes and stitch single rows of thread through the spines of those signatures. The signatures were not sewn firmly together, nor would they stay together after they were sewn.

The punches are mounted under each needle and crochet hook. They poke holes in the spine of each signature that allow the needles and hooks through to make the stitches.

I tried to understand the thread tension adjustments, thinking that I could make successful book blocks by tightening the tension. That didn’t work. I read the troubleshooting guide in the original manual to no avail. It was just not working correctly.

Eventually I removed all the needles and crochet hooks from the machine, and figured out how to remove the punches (see the illustrations for more on these components). I scrubbed all the parts and surfaces with solvent. With these parts now sparkling clean, I started over, putting new needles and crochet hooks into their blocks. I put the original punches back also, being careful to place them opposite each needle or crochet hook (this is essential). Once I had them all lined up, I tried again to sew book signatures. The machine refused my efforts, continuing to make rows of straight chain stitches and no more.

So I stared at the machine. And stared, assuming that I would eventually see what was wrong.

At one point I disassembled the saddle mechanism again so that I could analyze the operation of the needles, hooks and punches. Inside the saddle is a component called the Loop Hook Bar. It carries eight loop hooks across against the back of the needles. The purpose is to snag the thread from each needle and carry it over to the right, then hang that thread on one of eight adjacent crochet hooks.

This is the seven-step process for sewing a single signature. All of this happens in about one second. Subsequent signatures are sewn together by repeating this process. Click on the image to enlarge.

I turned the machine by hand, observing the Loop Hooks as they moved from left to right. I noticed that they were not hooking the threads at all. They were arriving too late; they moved to the right, then retreated back to the left, leaving the threads on their respective needles. I couldn’t figure it out.

The Smyth machine is cam-operated. The machine has a common driveshaft that runs through the center of the machine, and on that shaft are nine cams, each about 14 inches in diameter. Some of those cams have patterns cut into their faces; some have patterns cut into their rims. Some have both. One of those cams causes the needle bar to descend into the spine of a signature, then to pull the needle bar back up, lifting all of the needles and their threads back out. Another drives the Loop Hooks left and right.

My observation was that the Loop Hook Bar was out of time with the other parts of the machine. It was arriving late and retreating early. I crawled on the floor with my brightest flashlight and examined the operation of the cam that operates that bar. It was the only one that made sense. I loosened its setscrews with a socket wrench, and attempted to rotate it relative to the other cams. It wouldn’t budge.

This is a diagram of the two rows of chain stitches. Each pair of rows is made with the same thread, passing back and forth inside the signature. The thread enters the signature on the left (carried by the needle), then it travels to the right, and out (crochet hook row) and is twisted into a loop. Then the thread continues back down into the signature, and over to the left where it re-emerges with the needle and is stitched to the next signature. It’s all one thread that makes this dizzying path.

While down on the floor I noticed that all of the cams on the machine are keyed to a common key-way that runs along the main driveshaft. Each cam has a setscrew that is tightened into the key-way. In theory, if each cam is aligned with the key-way, the machine cannot be out of sync.

The machine was confounding. It refused to work; it refused to be in time, and thus it would never sew the parallel lines of stitches that make Smyth-sewn books so sturdy.

I made another trip to Los Angeles to visit a book bindery where there is a working Smyth machine. This one is younger than ours. It’s a Model 15 (ours is a Model 12). It was probably manufactured in the 1950s. Its serial number is in the thousands. Ours is in the hundreds. I was allowed to take plates off and look at the workings of this machine; I stitched a couple of book blocks. I took photos and videos, and I brought samples home that I had made on that machine.

I readjusted our machine and studied its behavior. It was just crazy. It did everything out of time, with the punches coming up after the needles retreated.

Pressure: I am working on a deadline here. The students’ books must be sewn in the next week. The machine must work in time for this project to be completed. I was on hooks and needles trying to solve the problem (the traditional phrase was pins and needles, but this is close enough).

I stared at it, and studied its behavior again and again. I was stumped.

At one point I was inching the machine through its steps, and the Loop Hooks missed again. I backed the machine up to see it more closely. I turned the handwheel forward and backward. And when I turned it backward a second time I realized that the timing was correct when I turned it backward!

Was our machine running backward?

I inched it backward a few times and observed that it worked correctly when in reverse. This was entirely my fault. I assumed that the machine (and its handwheel) turned clockwise when running. I wired the motor to turn clockwise. And I have been running it clockwise for several years now, and it has failed to sew books correctly for exactly that long.

I had never observed the rotational direction of either of the two working machines I visited. Both of them in fact operate with the drive wheel turning counter-clockwise! Since I had never seen a Smyth machine run prior to installing the new motor, I assumed that it would turn clockwise. Why would it not?

So (or sew…) this morning I rewired the motor to turn counter-clockwise. That took just minutes (three-phase motors can be made to run in reverse by switching any two of the three power wires). I started it up, and it now turns counter-clockwise.

I carefully checked the position of all the needles, crochet hooks and punches, tightened everything and cleaned all the handling surfaces. Then I turned it on and fed a signature into the machine. The punches come up through the spine; the needles and crochet hooks then come down through the holes made by the punches. The Loop Hook Bar slides from left to right, catching all of the threads (all but one in my case) and pulls them over to the crochet hooks. Then the Loop Hook Bar returns to the left, and the needles and crochet hooks retreat upward and out of the signature, carrying the thread with them.

It works! For the first time in our possession of the machine, it works!

This is a close-up of the spines of numerous signatures, sewn together by the Smyth machine. The left column is the needle column; the right column is the crochet hook column.

I stitched a handful of signatures, checked the tensions, and made a few more book blocks. My second-from-the-left Loop Hook needed to be adjusted. I did this by loosening the screw that holds it in place, slipping a tiny piece of tympan paper behind the top half of that hook, and tightening it again. That moved the hook 0.003 inch closer to the needle, allowing it to snag the thread successfully. And, with this small adjustment, I made the machine work. All six threads are being captured, all are being lifted and twisted, and all are being chain-stitched to the next signature. It’s very satisfying!

Addendum March 11, 2025: The TAGA students did have their books sewn on time! We gathered all the signatures, prepared the Smyth machine by setting the width of the books and the depth of the delivery table, and we started sewing.

Almost immediately, we had a strange mis-feed, where the front half of a signature folded up and over the previous signature, and we broke three crochet hooks. Ouch! We also bent a needle. I replaced all of these and we started again. Soon, we were sewing signatures again, and it was running quite well. We had a total run of 50 books, and we made it through the first 35 before we had to quit for the day. We will begin anew tomorrow, and we’ll finish the project.

We are applying a thin coat of bookbinder’s glue (PVA) to the spines immediately after we take them out of the machine. We found that this helps to prevent the unraveling of the chain stitches that can happen with this kind of sewing. (Our Smyth machine included a built-in paster, which I removed; since we are not a full production shop, I didn’t want to have to deal with paste on the machine, and the clean-up after a run.) I kept all the parts in case I ever need to reinstall it.

I will post more after we finish the project.


My giraffe in a jacket – more AI madness

I’ve been experimenting with AI, and having considerable success. In previous posts I wrote about using ChatGPT to convert printed text into editable text for a book I was trying to re-publish to get it back “in print.” Every other technique I tried had failed, and I had more or less given up until the AI engine helped me succeed on that project.

This is my Giraffe in a Jacket generated by Adobe’s AI Generative Workspace. I didn’t ask for the jacket, but the AI engine persisted in providing warm jackets to all of the giraffes it created.

I found myself recently needing an image of a giraffe. I turned to Adobe’s Generative tools to get an image for this project. I needed a tall rectangular image of an entire giraffe standing. I did not want any background. I started with the following prompt:

Please render a full body image of a giraffe, standing facing the viewer, no background. The giraffe is standing in a field.

As with many of my previous experiences with AI engines (all of them), the engine mostly ignored the key points of my prompt, and rendered a shoulders-up image of a giraffe. It’s a nice image; I was partially successful.

The second image is about the same: shoulders-up, giraffe facing the viewer. But on this giraffe there is a men’s jacket! I thought this was pretty funny. Abercrombie and Giraffe?

Adobe suggests that we make successive generations of images in their workspace. I tried several times without changing the prompt.

After five tries, Adobe gave me about half a giraffe, stopping mid-body.

This is better! The jacket is gone, and I have much more of the animal.

On the sixth try, the Adobe engine gave me a full-body image of a giraffe standing in a field. This set had no jackets; I finally got what I wanted.

Finally! I got the whole giraffe.

Where did it come up with the “idea” of a giraffe wearing a jacket? Does AI have a sense of humor? Was it being cheeky on purpose? Did some part of the AI system “think” that I wanted a giraffe wearing a nice leather jacket? A cotton jacket? A safari jacket (how ironic!)?

I played along, generating over and over, eventually getting a perfect image in the right arrangement for my project. I’m impressed.

And, I am also amused by the absurdity of my rather large collection of giraffes in safari jackets. Perhaps I should start a mail-order business to provide these nice coats to the giraffes of the world.

Here are 15 of the AI engine’s attempts to respond to my prompt. It looks like a catalog of giraffe jackets. The latest fashion on the savannah! I kept reminding myself that I never asked for a jacket.

I have also been experimenting with Adobe’s AI tools to determine whether applying a larger color profile when opening the image in Camera Raw might increase the color volume of AI-generated images. I’ll write that blog next. Be sure to check in later for that.


I’m holding a building permit in my hand!

This is the latest in my posts about building a new shop building. Click here to go to the beginning of the story.

After straightening out the issue of the Scenic Highway and Railroad Code, I was required to sign a host of documents sent by the county planner. These included notes about proper removal of construction waste from the property, inspection responsibilities, and acknowledgements about workers’ compensation insurance coverage. I signed, initialed, and returned the documents, and then I heard nothing for a few days.

This is not at all what a building permit looks like. Real permits are 8.5 x 11 white paper with the words “CONSTRUCTION PERMIT” printed across the top. This one is more impressive!

When I asked about the status of the permit I received a prompt reply that my permit was approved and ready for download. Really? Seriously? I went to the County’s contractor portal, and as promised, the permit and several associated documents were there, ready for me to download. I did it quickly, paranoid that they would rescind them again, and require me to prove that my building is not visible from the Moon.

The master drawing set is now “stamped” with construction approval notices in red on every page. All of these transactions have been done digitally so far. I have not (until today) committed anything to print, which has saved me an immense amount of time, paper and ink. My drawings, and all of the drawings submitted by the various engineering firms, soils engineers, and structural consultants, have been electronic.

The first page of the document carries the original application date: February 14, 2024.

Today, in honor of the approval of my application, I started printing the final drawings for the contractors to use in construction. I have 38 pages of primary drawings – structural, design, electrical, topographic, grading, and more. These are currently printing on my wide-format Epson printer. I bought two rolls of engineering paper (plain paper) to use for this. I’m printing at low resolution (720 ppi), and it’s going quite quickly.

This is my Epson printer printing pages for the contractors to use when building my new shop!

My printer software is treating it as one big job – 76 feet in length. I will have to cut the individual pages apart and then bind them for the builders. I will use copper grommets, set with a hammer, to hold these large (24 x 36 inch) pages together along the short dimension.
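For the curious, the 76-foot figure checks out if each 24 x 36 inch page feeds along its 24-inch edge on the roll (that feed direction is my assumption):

    # Quick check of the roll length for the print job.
    pages = 38
    advance_per_page_inches = 24  # assumes each page feeds along its 24-inch edge

    print(pages * advance_per_page_inches / 12)  # 76.0 feet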

This is getting exciting! I have a few tasks to complete before the grading is started. I need to revive my time-lapse camera and its weatherproof box, and then set it up on the hillside to photograph everything that happens from cutting into the hillside to erecting the building and having an open house party!

This will be fun. Stay tuned for progress reports. I’m sure it will be months before I can show you a photo of the finished building.


Adventures with Artificial Intelligence images

I have had considerable success with Artificial Intelligence tools. Most of that has been inside Adobe Bridge and Photoshop, but I have also done well with ChatGPT, using it to publish a catalog from an out-of-print booklet. Click here to read how that project was done.

I’m most impressed with Photoshop’s AI Noise Reduction. That one is the most productive for my photo work, and it is the one that has me gushing in praise of what AI is capable of. You can read my praise of AI Noise Reduction here.

And, I have also been toying with some of the AI engines to create images from digital whole cloth (whole pixels?).

A few months ago I purchased the blognosticator.com domain, after years of trying, and I wrote about that in a previous blog post. For that post I needed an illustration of a man putting up the .com suffix. I started by looking at stock images of men putting up things, then I used one of those images as a model to draw my own man putting up the .com suffix. I made it look like a blueprint (a technology largely unknown to most people these days). But I liked the motif, so I did it that way. Click here to see that illustration as I published it.

After completing that post, I moved on, working on my building permit application, getting ready to teach again (I’ve been hired back at Cal Poly for the year), and going off to Burning Man again (no rain this year).

But there was a nagging thought in my head… could I have asked an AI engine to draw the man putting up the .com suffix? I decided to give it a try.

Attempt No. 1: ChatGPT: I put in a prompt asking for a man putting up a poster, with the man’s back to the viewer.

Response: ChatGPT told me it can’t create images.

I tried ChatGPT 4.0 today, and got the same response.

Attempt No. 2: Google Gemini: I entered a prompt asking it to:

Create a photo of a man putting up a poster on a wall. His back is to the viewer, and he is wearing a hard hat.

I received this response: “Generating images of people is only available in early access with Gemini Advanced. Get early access to new Gemini features when you subscribe to Advanced here.” I signed up for the free trial period and proceeded.

I tried, and succeeded in getting a photo quality image of a man putting up a poster. With “refine” prompts I was able to get Gemini to include more of the ladder. In the end, I was pretty happy with those images. Here is my favorite:

I liked this one the best. It allows me to have the man installing the .com suffix on a sign.

When I tried, as a stand-alone assignment, to get Gemini to draw the letters .com as a large sign on a building, I didn’t fare so well at first. I did get Gemini to render the word “blognosticator” in lower-case letters as a sign on a building, and when I asked for the .com, it succeeded. I tried several “refine” prompts, but each one got worse, so eventually I stuck with the first one, which is OK.

I put the man putting up the poster into that photo. He’s putting up the .com suffix now. It turned out pretty well.

I ended up not using the ladder at all! I put the man into the AI-generated sign of blognosticator.com. Both contributing images were made by Google Gemini.

My love of street art mixed with Artificial Intelligence

I have recently attempted to get each of the AI engines to draw the word “BLOGNOSTICATOR” on a red brick wall in graffiti style. I figured that this would be pretty easy for these brainiac applications. I was very wrong. It turns out that ChatGPT does words really, really well, but not photos. The other AI engines also handle text quite nicely. But the photo-generating part of each one has a severe problem with spelling. I would have thunk that spelling would be easy for these programs, because letters can be put into a matrix and then rendered.

I started in Adobe Photoshop, using its Generative AI tool. I put in the following prompt:

Please make a photo of a red brick wall. On that wall is the word BLOGNOSTICATOR in graffiti style, with bright colors and clever lettering.

Photoshop chewed on that for a couple of minutes and then presented this:

Badly spelled, but OK otherwise. I wasn’t thrilled.

Curiously, the word “BLOGNOSTICATOR” was misspelled. I thought it might be a one-off error, and added double-quotes to the word to get the AI engine to understand that it was literal. Usually problems in text processing or computer-learning are improved by specifying words that are not in any dictionary as literals.

And, Photoshop’s AI failed again. And again. And again.

I tried simpler words to see if it would misspell every word. I asked it to paint the word “STITCH” on the wall, and it came back spelled “STCH.”

Really?

I switched to Microsoft’s Copilot software to see if it could do the BLOGNOSTICATOR image, or any image with words.

Copilot fell on its digital face with BLOGNOSTICATOR, but it did succeed with the word APPLE. I was thrilled, but I didn’t need APPLE. I asked that application to generate more images of the word BLOGNOSTICATOR on a red brick wall.

Of all the AI systems I tested, Microsoft’s Copilot produced the most pleasing graffiti-style illustrations for me – despite the misspellings. The three images it created are stunning. I love them. I saved them, opened them in Adobe Photoshop, and edited the word BLOGNOSTICATOR by hand so that it is spelled correctly. Now I love these images even more. They are creative, showing real style, where the other engines I tried produced more pedestrian images. The others produced good images all the way around, but none showed the level of “creativity” that Copilot did.

If only my blog were called BLOGNOTISTTOR! I love the art, but the spelling is terrible. This, and several other extraordinary images were generated with Microsoft Copilot.
With some hand modification in Adobe Photoshop, I was able to make this image perfect, and I LOVE the artwork! This exceeded my expectations.

One image produced by the Adobe Photoshop AI system proved that Adobe’s software is also capable of “creativity.” I asked for the man putting up a poster. The first result is incredible!

The man is beautifully rendered, and the composition is excellent. What showed “creativity” is that the man has drawn a self-portrait on the poster (I didn’t ask for anything to be on the poster). It’s by far the best AI illustration that I have received yet and the one that shows computer introspection.

This is my favorite image so far. It was created with Adobe Photoshop’s AI Generative Fill tool. I have not modified this image at all, except to reduce its size for this blog.

As I learn more about these tools, I will post more. Please stop by from time to time and see what I have wrought – by typing prompts on my keyboard. And, if you would like to be notified when I post new stories, there is a button way down at the bottom of this page that you can click to be informed of new posts. I have over 450,000 readers now.


Permit approved and rescinded!

This is one of a series of posts about building a new shop building. To read the first of these posts, please click here. Each has links to the next.

The Grading permit corrections document arrived the next week. That was round one for that process. Questions on that document included underground utilities (only water and electricity in my case), the replacement of soil after the utilities are delivered, and a number of questions about run-off and water capture. In our county, all rainwater that comes off of a roof must be captured and put into a catch basin so that it will percolate into the water table or run into a natural waterway. Uncontrolled storm run-offs are not allowed, in order to prevent erosion.

The catch basin is already built, and my building will ultimately be connected to the underground pipes that feed that basin. This is relatively simple for me; two rain gutters will be attached to the building, and the outflow of those will be connected to the existing pipe.

There is a paragraph about archaeological studies of the property. Since there are recently built structures on the same land, I asked the property owner for more information on that topic. I think it has been covered in the previous building permits.

This is one part of the grading permit application. My building is the rectangle on the lower-right. To the left is an existing barn. The catch basin is at the top-center. This drawing includes topographic, grading, soil compacting and water run-off and capture information.

The County eventually waived the archaeologist’s report, as one had been done earlier for the same property. They did insist, however, that the building might be in sight of a scenic highway. I looked up “scenic highways in San Luis Obispo County” and the road near my shop is not on that list, so I asked the County for clarification. They insisted that the nearby road is a scenic highway. Further research showed that the entire ridge of hills adjacent to the property is included in the Scenic Highway and Railroad code. I had to either comply by adding landscaping and repainting my building a darker shade of gray, or show the County that my building is not visible from a scenic highway or railroad.

I took my camera, and drove out to the nearby roads. Then I parked and hiked along the road, taking a photo toward my building site every 200 feet. Surprisingly, my building will not be in sight of this road. I submitted this photo and an explanation of the sight-line. They didn’t respond, but a few days later I got an e-mail telling me that my permit had been approved.

This is one of my photos, showing that my building will not be visible from the closest major road. The red arrow points to the location of the building site. This documentation was eventually successful with the planners. The Santa Lucia mountains are in the background.

Now all I have to do is send them another $1,600 for the permit, and I will be able to begin construction!

It took just over eight months and a tremendous amount of money to get this permit, and I plan to get started as quickly as I can.

Stay tuned! I’m resurrecting my time-lapse camera and outdoor box, and I will install this on the hill above my building site before we begin work. My plan is to document the entire process. I’ll post a link to it here.

Addendum: October 1, 2024: The County rescinded my permit, citing the Scenic Highways and Railroads code again. They were apparently not satisfied with my response. So I went out and took more photos. I showed views from two nearby roads with arrows showing that my building site is not visible from those roads. Then I took one more from the location of the closest railroad (2.3 miles away). It is impossible to see my building from the railroad tracks, thank goodness. (I wouldn’t want any Amtrak passengers to be offended by my shop.)

I submitted a second draft of that document, and also called to talk to the planner, who, after some review, agreed that my building will not be visible from the roads in question, nor from the railroad. I received a note from the planner indicating that my document was accepted and that the clearance was granted.

Two small clerical items remain, but when complete, I think I will get the actual permit. Today? Probably not. It’s Friday.

Note: On my interactions with the county planners: I haven’t meant to demean any of the planners and inspectors with whom I have had interactions in this process. Each person I have conversed with has been professional and courteous. These posts show a certain level of frustration with the process of applying for a building permit in my county. It is obviously a complex process involving laws, codes, ordinances and local rules. Getting approval of all of the necessary steps in building is difficult, and I think that I have succeeded, in large part with the help of professional engineers, contractors and builders, and with the various county planners and reviewers.


AI Crop tool in new Photoshop is delightful

In my previous two posts I have talked about features in the latest (currently beta) version of Photoshop that are worthy of your attention.

This is another. Many (many!) times in my career I have wanted/needed a photo to have a little bit more on the sides, or a little bit more on the top. Cropping was destructive, and it proved challenging to get the right result using the tools we had at our disposal. Even with a Crosfield drum scanner and film recorder, there was no easy way to expand the canvas under a photo. We resorted to clever cropping, and even used non-proportional scaling occasionally (only when it was not obvious).

This is a photo I took at Burning Man this year. The woman is Renée Rose, who was walking on stilts into the ring surrounding the Man on Saturday night, August 31. I took the photo with my Canon R5 and my 100-500 RF lens at 500 mm. The photo benefits from AI Noise Reduction (see the previous blog), and two small corrections using AI Erase in Adobe Bridge. Here I am using the Crop tool in Adobe Photoshop to expand the canvas on the photo. With the new AI Generative Fill feature, Photoshop will expand the canvas and fill with more of the background colors.

Here we are, years later, with artificial intelligence making it possible to stretch the canvas and increase its size, so that photos can be made to work better in the spaces they need to fill.

This is the Generative Fill option in the cropping tool in Adobe Photoshop (version 25.13 is the latest beta).

It works like this:

Open the image. Choose the Cropping tool, and instead of cropping inward, crop outward. This would normally cause the image to be expanded, with the expansion being filled with the current background color. In the new Photoshop there is an option to choose Generative Fill for the expansion.

When you press Enter or click the check mark at the top of the screen, you get a prompt box. If you enter nothing into the box, the program will fill with more of what it finds along the edges of the photo. This is what I needed all those times decades ago. In theory, you could also put text into the prompt, saying, for example, “Expand with tomatoes.” I tried this, and it did not work. Instead, it filled the new space with more of the photo, creating about the same effect as the empty prompt.

This is the same image with an expanded background.

Since I didn’t need to fill with tomatoes, I declared victory, and am adding this tool to my list of favorites in the new Photoshop.

At some point I will try to fill with tomatoes.

Tomatoes? These were generated by Photoshop’s Generate Image function.

This one is really impressive! AI noise reduction

Yesterday I wrote about using the new AI Erase function in Adobe Camera Raw. You can read about that here.

This past week I photographed the annual Festival Mozaic summer music festival. That involved 19 events in both San Luis Obispo and Santa Barbara Counties. I drove over 600 miles in 12 days to photograph all of these events. It was worth it!

I have been the staff photographer for the festival for 20 years, I think. In that capacity I have the opportunity to shoot still photos of some of the world’s finest musicians in performance of – mostly – classical and baroque music. The music director is Scott Yoo, who has an award-winning television show on PBS called Now Hear This. Mr. Yoo assembles the ensembles for the festival’s various performances by hiring the very best bassoonist, the very best flutist, the very best violinists, and more, choosing the right people for each musical presentation.

This year’s festival ended with an orchestra performance of Mozart, Wagner and Beethoven, presented in the Performing Arts Center at Cal Poly, an auditorium that seats about 1,000 people. It was the first orchestra work presented by Festival Mozaic since the pandemic, and it was the finest performance I have seen in my life. Seriously.

Over the years I have developed techniques for photographing performers in the various locations used by the festival. At Cuesta College Performing Arts Center I work in the control booth, above the stage and at the back of the hall about 60 feet from the performers. From there I shoot with my 100-500 Canon lens on my Canon R5 camera. This combination usually works well because I can make a “portrait” of an artist from that distance and fill the frame, or close. There is an open window there so I don’t have to shoot through glass.

At the Cal Poly Performing Arts Center I shoot from the back of the hall, about 80 feet from the stage. It’s a bit far for these portraits, but I can take photos of groups of players, or I can crop a player out of a larger photo. The resolution of the R5 is high enough that cropped images are still adequate for small print work and perfect for social media.
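A quick bit of arithmetic shows why those crops hold up. The R5 produces 8192 x 5464 pixel files, so even a crop to half the frame in each dimension (a made-up example) leaves plenty of pixels for a modest print:

    # How large a half-frame crop from the R5 prints at 300 ppi.
    # The half-frame crop is a hypothetical example, not a measured case.
    crop_w, crop_h = 8192 // 2, 5464 // 2
    ppi = 300
    print(crop_w / ppi, crop_h / ppi)  # about 13.7 x 9.1 inches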

And in that same hall I usually shoot a few panoramic photos. These are my specialty. Over the years the stitching software I use, PTGUI Pro, has gotten better and better to the point that it stitches these images with essentially no errors. It never creates distortions, never makes odd overlaps, and always maintains the images as they were taken – sharp, in-focus, appropriate for a panoramic image.

Festival Mozaic Summer Music Festival 2024 Orchestra performance, Saturday, July 27, 2024. This is the final image (reduced in resolution for this post). It was made from nine images taken of the performance, stitched with PTGUI Pro software. This image used the enhanced Noise-Reduction DNG files as source images. Click on the image to see an enlarged view.

But shooting photos indoors of musicians in motion requires that I use a relatively high shutter speed – usually faster than 1/200 second – to stop the motion of the violin bows and the tympanist’s drumsticks. This requires that I push the ISO way up, because I am also trying to get enough depth-of-field in these photos to get every face in sharp focus. At Saturday’s concert I was shooting at ISO 12800.
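To put that number in context, here is the rough exposure arithmetic as a sketch. The f/7.1 aperture is the 100-500 lens wide open at 500 mm; the stage-brightness value is my assumption, chosen only to illustrate the relationship:

    import math

    # Exposure check: EV_100 = log2(N^2 / t) - log2(ISO / 100), solved for ISO.
    aperture = 7.1        # RF 100-500 wide open at 500 mm
    shutter = 1 / 200     # fast enough to freeze bows and drumsticks
    scene_ev100 = 6.3     # assumed stage brightness at ISO 100 (dim indoor light)

    needed_iso = 100 * 2 ** (math.log2(aperture ** 2 / shutter) - scene_ev100)
    print(round(needed_iso))  # roughly 12800 with these assumptions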

On modern cameras such high ISO settings are not a big deal. 12800 is perfectly reasonable. I can use these images for print at full page size without the sensor noise being distracting. It’s certainly visible, but it is not going to prevent the use of the photos for high-resolution printing.

My work flow is to import my Camera Raw images through the Photo Downloader program that is an adjunct of Adobe Bridge (See my essay on that topic here). Though it is not the best software in the toolbox, it does this conversion correctly and quickly. The Canon CR3 files from my camera are read from the memory cards, converted to DNG files, renamed, then saved to my hard drive – all in one streamlined action. The result is that my “original” camera images are all in DNG format.

From there I work in Adobe Bridge. There I view all of the images, score them with one to five stars, delete bad ones, rename them in groups to describe their content, and organize them for editing. I touch every image, with very few exceptions. Most often my technique involves adjusting the exposure, reducing the highlights, expanding the shadows, increasing the contrast, and often adjusting the color temperature of the photos.

I open photos in groups that are similar, apply these modest (and sometimes gross) adjustments, and click “Done” to return them to their folders. Every photo gets a title, often applied in batches. I also embed extensive IPTC data into every photo. These entries include lists of the performers, the venue, the location, sublocation, the event, copyright, contact information, keywords, and more.
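For anyone who would rather script this step than do it in Bridge, the same kind of batch embedding can be done from the command line. This is only a sketch of an alternative approach, assuming ExifTool is installed; the field values are placeholders, not my actual entries:

    import subprocess
    from pathlib import Path

    # Batch-embed IPTC fields into DNG files with ExifTool (placeholder values).
    fields = [
        "-Keywords+=Festival Mozaic",
        "-Keywords+=orchestra",
        "-City=San Luis Obispo",
        "-CopyrightNotice=Copyright 2024, photographer name here",
        "-Caption-Abstract=Festival Mozaic 2024 orchestra performance",
        "-overwrite_original",
    ]

    for dng in sorted(Path("concert_photos").glob("*.dng")):
        subprocess.run(["exiftool", *fields, str(dng)], check=True)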

For the panoramic photos I look at the source images to be sure that they were stepped acceptably when I took them (usually a 20 percent overlap). I look for troublesome images that might cause the stitching software to hiccup. Then I group each set of photos into folders named for their content: Orchestra pano 2, for example.
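A quick way to sanity-check the stepping is to figure how many frames a sweep needs at a given overlap. The numbers here are hypothetical; only the 20 percent overlap matches what I actually aim for:

    import math

    def frames_needed(sweep_deg, frame_fov_deg, overlap=0.20):
        # Each frame after the first adds only (1 - overlap) of its field of
        # view in new coverage.
        if sweep_deg <= frame_fov_deg:
            return 1
        new_per_frame = frame_fov_deg * (1 - overlap)
        return 1 + math.ceil((sweep_deg - frame_fov_deg) / new_per_frame)

    # Hypothetical example: a 36 degree sweep of the stage, with a lens that
    # covers about 5 degrees horizontally per frame.
    print(frames_needed(36, 5))  # -> 9 frames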

Most of the panoramas I shot last Saturday were about one-half stop overexposed. I open the whole batch together into Camera Raw, Select All, then adjust the exposure on all of them at once. I often also reduce highlights, expand the shadows then check the color temperature (theatrical lighting can be a bit warm). Then I click Done, and move to the next step.

Enhancing with Artificial Intelligence
Adobe Camera Raw has had a noise reduction control for several years, and it is quite effective. I usually consider its use when I zoom in on an original image and I see the telltale pattern of noise that is created by shooting at high ISO settings. This, I have always believed, gives me about one stop of noise reduction – it is the equivalent of setting the camera at a lower ISO setting – after the fact.

This is the location of the new AI Noise Reduction function in Adobe Camera Raw.

Artificial Intelligence Noise Reduction
The new feature in Adobe Camera Raw is a button that says Noise Reduction – De-noise. When you click on that button, a dialog opens with a slider allowing you to set the amount of noise reduction. The text in that window says that the program will use AI to reduce noise. The default setting is 50 units. I have found that using this setting works well. When you click the Apply button, Camera Raw applies its de-noise algorithm to clean up the image(s). When finished, it creates a duplicate file and names it with the original name plus -Enhanced NR.

This is an enlarged view of one image (150% view) where the noise of ISO 12800 is clearly visible. Click on the image to see an enlarged view.
This is a view of the image after AI Noise Reduction. Click on the image to see an enlarged view.

It’s important to carry out these steps in this order to reduce the work needed in Photoshop once these files get there. In my case, the duplicate Enhanced NR files also carry the XMP data for exposure, highlight suppression, shadow enhancement, color temperature, etc.

Once Camera Raw is finished, I click on the Done button and return to Bridge.

Sending the files to PTGUI Pro
My stitching software – PTGUI Pro – can read JPEG and TIFF files directly, and I discovered today that it can read DNG files directly.

For the past few years I have been converting my DNG files into TIFFs, but perhaps that is no longer necessary. It would save one step, and might reduce the chances of errors occurring in the conversion (though I have never seen any). I ran a test of this work flow. I added exposure and color temperature modifications to the DNG files, then I opened them in PTGUI and processed the panorama. PTGUI read the files and stitched the image, but it ignored the embedded modifications I had made to the DNG files. So, this technique does not work for me.

This is my successful work flow for using AI Noise Reduction in Adobe Camera Raw, followed by converting the images to TIFF in Bridge/Photoshop, and finally stitching them in PTGUI Pro.

To get the files into PTGUI, I select them, choose Tools in Bridge, then Photoshop, then Image Processor (This opens the famous image conversion software invented by Adobe’s Russell Brown). Image Processor converts files in batches. There are numerous options in Image Processor; one of them is to convert to TIFF. I run this on all the selected images without changing resolution. Each image is opened momentarily in Photoshop, then saved in a new folder named TIFF.

From there, I open that folder, select all the TIFF files, then right-click and tell the computer to Open In PTGUI Pro. In that application I align the images as necessary, then stitch them into a cohesive panoramic image. This is so fast in recent versions, and with my new Mac Studio computer, that its processing time is negligible.

Option: stitch in Adobe Photoshop
It’s also possible to stitch panoramas in Adobe Photoshop, though it generally does not work as well; Photoshop often makes errors when stitching panoramic images. The feature is found under File>Automate>Photomerge. That said, I tested it today and found that Photoshop did a fine job of stitching the panorama from these files.

Extraordinary noise reduction
The final product clearly shows enhancement, and I think it is remarkable. It is visibly superior to manual noise reduction (or no noise reduction). The skin tones are smoother; shadows are free of the lattice-work of noise I usually find there. It took just one try to discover that this use of AI in the Adobe products is worth the effort, and it lives up to the hype that Adobe and others are making about artificial intelligence. This enhancement step adds about two minutes to the work for each image processed, maybe less. In the end, that extra time is justified, as your photos will look better immediately, and will not exhibit the tell-tale noise we usually see in high-ISO photos.


Another victory for automated intelligence in photography

A few weeks back I wrote about my success in working with ChatGPT to manage some very complex text, making it editable, and making it possible for me to re-publish two out-of-print catalogs of matrices for Linotype and Intertype machines. It was an extraordinary success for me. You can read that post here.

Since then I have been experimenting with the AI systems offered by Google and Adobe, and hoping that these offer some assistance to me when working with high resolution images.

The first of these was a routine clean-up of a photo where a giant traffic signal was in the frame. I wanted it gone. I did it the old fashioned way first, using the lasso, content-aware fill, the clone tool, and paintbrushes. It took me the better part of an hour to complete, and it was perfectly acceptable.

When I showed this image to my friend Jason, he said, “I can do that with AI in 30 seconds.” I accepted his offer to demonstrate, and we arranged a Zoom meeting where he shared his screen and demonstrated the process. In the end, including teaching time, it took about ten minutes, but the AI Erase function in the beta version of Photoshop (in partnership with Adobe Camera Raw) did the job much better than I had, and it accomplished it in just minutes.

Following are the steps that he showed me:

This is the original image of San Luis Diagnostic Center with the traffic signals, a truck, and various shadows. These images were captured in Adobe Camera Raw. I have selected the traffic light on the right edge using the Eraser tool. Once that selection is made, I hit Apply, and the signal was removed from the image.
My friend Jason cautioned me not to select too much for the AI engine to work on, as that will often cause it to fail. Instead, I have selected a street light at the top of the pole. This was easy for the AI erase tool to remove.
Here, I have selected the horizontal beam of the signal with its many lights. Curiously, the AI erase tool had no difficulty removing the signal and repairing the tree.
In this image I have selected more of the arm of the signal. It passes over the terra cotta tile roof and into the stucco exterior. I hedged, and selected the part on the end.
Selecting the large vertical pole and its signals was most challenging for the AI Erase tool. It had to remove the steel post, then patch the stucco wall, the planter box and, most importantly, the tiles at the soffit. The tool took only about a minute to do this, and it did it flawlessly. It even replaced the right-hand planter on the white wall.
Here I have selected the street sign. The AI Erase tool removed it easily. I left the brackets unselected; I would get those later with the clone tool.
Removing the third traffic light was easy.
The shadow left behind by the pole was removed.
Tire tracks on the pavement were removed in a group.
Last, the truck on the right was removed. I had to do this twice, as it didn’t work correctly the first time.

Elapsed time? 11 minutes and 20 seconds. It took longer – much longer – to write and illustrate this post than to make the corrections on the photo.

What have I learned? As with Photoshop’s Content-Aware Fill tool, with which we are all familiar, it’s smart not to bite off too much in each action. It would be impossible to remove all of the elements of the traffic signals in one pass. It works better in small morsels.

Overall, the photo is much better after. The clutter of the traffic signals, the tire marks, the sign and various artifacts made the original too busy for me, and for the client.


The Linotype and Intertype catalogs

In the previous post I described how I published these out-of-print booklets.

These are links to the PDF versions of those publications. Please feel free to download them.

Click here to download the Linotype Matrix catalog in Font Number order. Updated July 12, 2024
Click here to download the Linotype Matrix Catalog in alphabetical order. Updated July 12, 2024
Click here to download the Intertype Matrix Catalog in Font Number order. Updated July 12, 2024
Click here to download the Intertype Matrix Catalog in alphabetical order. Updated July 12, 2024