I wanted to test whether AI-generated images could be used as a fundamental component in a material art pipeline.
---
My tested process: create a source image in Midjourney (text prompts only), import it into Substance Sampler (with some adjustments), move to Substance Designer for some quick tweaks and fixes (mostly to remove artifacts caused by the pitch-black areas in the Midjourney source images), and then render in Marmoset Toolbag.
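(Side note: the pitch-black fix doesn't strictly have to happen in Designer. Below is a minimal sketch - not my actual setup - of how the black point of a source image could be raised in Python with Pillow and NumPy before it ever reaches Sampler; the floor value and filenames are placeholders.)

```python
# Minimal sketch: raise the black point of a Midjourney source image so that
# pure-black pixels (which cause artifacts further down the pipeline) can't occur.
# The floor value and filenames are placeholders, not my actual settings.
import numpy as np
from PIL import Image

def raise_black_point(src_path: str, dst_path: str, floor: int = 12) -> None:
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float32)
    # Levels-style remap: [0, 255] -> [floor, 255], so nothing stays pitch black.
    img = floor + img * (255.0 - floor) / 255.0
    Image.fromarray(img.astype(np.uint8)).save(dst_path)

if __name__ == "__main__":
    raise_black_point("mj_source.png", "mj_source_lifted.png")
```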
---
These 8 materials are the result of 200 minutes of Midjourney time (the full allotment you get on the "basic plan"). The base image for each material took around 10 to 50 iterations through Midjourney to produce - plus a lot of discarded experiments that didn't lead anywhere.
---
A few things I think are worth noting about AI image generation:
- The process doesn't get faster as you get used to it - rather the opposite, since you run through more iterations once you know what you're looking for and what to avoid. This means it doesn't scale well with production.
- The poor cohesion in lighting information and direction produced by the AI leads to poor results later in the pipeline (the height map gets messed up); a rough mitigation sketch follows this list.
- Many times, when I struggled to get a remotely usable result out of Midjourney, I concluded that a search engine or a phone camera would be a far more efficient way of sourcing images to base materials on.
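Not something I used for these eight materials, but one hypothetical way to dampen the lighting problem is to subtract the low-frequency lighting gradient (a heavily blurred copy of the image) before height extraction. A rough sketch, with an arbitrary blur radius:

```python
# Rough sketch: flatten large-scale lighting gradients in a source image before
# height extraction, by subtracting a heavily blurred copy and re-centering
# around mid-grey. The blur radius is arbitrary, not a tested value.
import numpy as np
from PIL import Image, ImageFilter

def flatten_lighting(src_path: str, dst_path: str, radius: int = 64) -> None:
    img = Image.open(src_path).convert("L")
    low = img.filter(ImageFilter.GaussianBlur(radius))  # large-scale lighting only
    detail = np.asarray(img, dtype=np.float32) - np.asarray(low, dtype=np.float32)
    out = np.clip(detail + 128.0, 0.0, 255.0).astype(np.uint8)
    Image.fromarray(out).save(dst_path)

if __name__ == "__main__":
    flatten_lighting("mj_source.png", "mj_source_flat.png")
```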
---
All in all, this approach (generating images to base materials on) doesn't seem to be the way of the future.
I have, however, seen what other artists have come up with using AI tools for material art, and I do think there are use cases beyond what I've tried here (Stan Brown's DALL-E2 pipeline experiments, for example).
I'll probably run more tests on those other potential use cases in the future. But I'm out of fast hours in Midjourney for now, so I'll take a break.