clarence

Everything posted by clarence

  1. The one I’m looking at has 12GB. Is that too limiting?
  2. Quick try with decals. I finally found a good use for my collection of vintage circus posters. This feature will make my life so much easier. The implementation really is spot on. Maybe a switch for locking the aspect ratio of bitmaps would be nice (for dialling in an exact size), but overall it’s fantastic.
  3. So, that’s why I found a cheap one : )
  4. Perfect, thanks! I will check out the various cards. Is a 2080 Ti a safer purchase than a 3060?
  5. Amazing! Well done. It looks like I will need a new computer soon. What half-decent GPU would you recommend for UE5 as a minimum?
  6. It’s really easy to get started. And the trial runs for 45 days.
  7. Just finished the download. Is it ok to skip Easter this year to play with scatter and Cosmos instead?
  8. Looks useful indeed : ) There were very few outfits available when I tried MetaHuman Creator a few months ago. But I’m sure that will change soon.
  9. The machine is not making the choices. The artist does. It’s like multi-dimensional photography. The artist chooses when and where to take a photo.
  10. How hard is it to change outfits and pose MetaHumans?
  11. I would say that your description of AI is a bit simplified. Sure, the AI is taught how to “paint” by looking at thousands of artworks. But then it mixes everything up in a big bowl, finding new connections and meta-concepts that the human brain is almost incapable of discovering. “Drawing in the style of xx” is just a tiny fraction of AI art. Instead, consider what a drawing halfway between Monet and Mondrian would look like. Now, add thousands of other images in a ridiculously complex web of connections and identify a new “middle form” between the Monet-Mondrian drawing and any other image. Literally, ANY other image, art or not. Then do it again. And again. It gets crazy pretty fast (but not necessarily good). And the end result will have almost nothing in common with the starting points.
  12. My experience with AI imaging and writing over the past 2-3 years is that the systems still need a lot of handholding. It might look cool when an AI spits out marketing blurbs by the second, but in reality only a small part is usable. If I get ten variants, maybe one can be used as a basis, with snippets inserted from four of the others. The rest is not usable. So I still have to work with the text; it’s just that I don’t have to start from scratch. I must pick out the best parts, combine them into a coherent whole, remove any AI wonkiness, add my personal touch and so on. Which means I must know how to write and what makes a good blurb, otherwise the result will not hold up. And image generators work much the same way.
  13. Makes me wonder what the old modernists would think of Unreal and realtime rendering.
  14. I’m not a Vray user, but here’s how I would do it in Corona:

    1. Make sure the model is scaled to real-world measurements.
    2. Place the main object on a large “floor” (much larger than the current one). Create a shadow-catcher material and apply it to the floor.
    3. Import the background image as a background in Vray (or C4d).
    4. Light the scene with an HDR mimicking the sky in the background image (i.e. time of day and weather - check the free HDRs at Polyhaven). Make the HDR invisible to the camera. Adjust the direction of the sun in the HDR to roughly match the background image, and play with its light intensity.
    5. Adjust the main object’s materials for more realistic reflections. Add subtle grunge maps to the metal as bump and reflection maps.

    To fine-tune the lighting, you can add fill lights and vertical planes to act as light blockers and reflection boosters.
  15. Deeply impressive results. I’ve been using AI for limited tasks - mostly generating portraits and occasional ideation - for a couple of years now. Some of the results have been good enough to use in my books, but just barely. And not very reliably. DALL-E 2 looks like a significant step forward in quality. I just joined the waiting list, so I might be able to leave a report here eventually.
  16. Brilliant! Thanks. It’s pretty close to how I set up furniture shots, except for the HDR light source. And I haven’t played with highlight compression. I will have to give it a try : )
  17. Beautiful : ) It would be interesting to see a short tutorial on how you worked with lighting and materials in the chair scenes. The renders have a lovely “minimalistic luxury” look that’s not easy to achieve.
  18. Tone mapping and LUTs in Corona really speed up the process. Finding the right look is much faster. In most of my projects the post-processing is in place even before the models are complete. Calling it POST-processing just doesn’t seem right anymore.
  19. It’s the same thing in music. As soon as audio perfection was achieved, musicians started using tape emulators, analog synth emulators, old-school mixer emulators. And so on. Adding “non-linear” aspects is almost ubiquitous in productions these days. And to my ears it really sounds better. The richness of an instrument run through pro gear, emulated to perfection - warts and all - is so much nicer than digital cleanliness.
  20. The lighting looks fantastic. I will have to try those HDR lights.
  21. Haha, don’t we all catch it once in a while! Fantastic renders anyway : ) Now, off to IKEA to see if they have a cheap ripoff in MDF.
  22. Metahuman -> C4D?

    Getting Metahumans into C4d would be cool. Then into Marvellous Designer and back again to render in C4d/Corona.
  23. Wegner is one of my favourites among the Danish modernists. Your render captures the materials perfectly!
  24. I find that MD is very picky when importing meshes. I only work with stills, but I usually import an FBX or OBJ first in A-pose. After making the clothes, I import the final pose to the character.
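The “middle form” idea in post 11 can be sketched as a toy interpolation between points in an embedding space. This is a hedged illustration only, not how any particular model works: the vectors and the `lerp` helper below are made up for the example, and real generative models operate on embeddings with thousands of dimensions.

```python
# Toy sketch: artworks as points in an embedding space (3-D here for
# readability; the coordinates below are invented for illustration).

def lerp(a, b, t=0.5):
    """Linearly interpolate between two embedding vectors."""
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

monet = [0.9, 0.1, 0.4]      # hypothetical embedding of a Monet
mondrian = [0.1, 0.9, 0.8]   # hypothetical embedding of a Mondrian
other = [0.5, 0.5, 0.0]      # hypothetical embedding of ANY other image

# Halfway between Monet and Mondrian...
middle = lerp(monet, mondrian)
# ...then halfway between that result and any other image, and so on.
# Each step drifts the result further from either starting point.
middle = lerp(middle, other)
print(middle)
```

Repeating the second step against ever more images is the “ridiculously complex web of connections” in the post: after a few hops the result shares almost nothing with the originals.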