
Leaderboard

Popular Content

Showing content with the highest reputation on 06/16/2023 in all areas

  1. If anyone is interested in a Scene Nodes version of this: emit_and_displace_circles.c4d
    2 points
  2. Maybe something like this: emit_and_deform_circles.c4d
    2 points
  3. It's the opposite, though: they have, thus far, been extremely careful to use only their own licensed material for training, and have pledged to compensate contributors to the stock library that is being used to train their AI models. As with all things Adobe, we'll have to see how that pledge actually plays out, but so far all indicators are that they want to play fair.

     I suppose we will have to define what "significant human intervention" means. There already exist many apps that can produce compelling imagery at the click of a button without any AI. This is a much broader topic, and likely exceeds the scope of a single forum post, but the last 40 years of hardware and software progress have made it easier and easier to produce better and better imagery with less human intervention. Rendering is literally a process whereby you press a button to relinquish control to a computer to produce the final output. Yes, there has to be initial prep before you press that button, but all that prep is basically translating a human thought into a state for the computer to output. My stance is that it's the same with AI, except that it makes all that prep work less of a chore: a tool that removes more technical barriers between an idea and an output. Replacing a tedious, technical process isn't necessarily a bad thing in the pursuit of content creation. There is definitely purity and value in pure mechanical and technical prowess, but in a capitalist world, easier methods will always take general precedence, especially if they can reduce costs. For me, "significant human intervention" need only be the creative idea or direction that a human comes up with; being able to translate that effortlessly onto a certain medium should not detract from it.

     I'm in the beta, yes, and have played with it a bit. It's promising, but extremely limited in what it can currently do. Each generation is random; any further tweaking requires traditional methods. It also struggles with people, especially non-Caucasians, and it's heavily censored. I couldn't make an "exploding text" effect, because the word "explode" is banned, so good luck trying to make a "cool guys don't look at explosions" poster. I think Adobe's approach of adapting Firefly tech into its core apps (like Generative Fill in Photoshop, and the vector recolouring in Illustrator) is the correct one. By making it a tool that can do a certain something and be complemented by the existing toolset, it works a lot better for actual work, rather than being a self-contained quasi-random image generator that's only useful for generating concepts and falls apart as soon as you get any specific notes or feedback.

     I don't believe there are any generation limits. It's online only at the moment, but I imagine the final product should be able to work offline, at least optionally. If it works offline (as it should), I wouldn't want to pay anything above the existing CC sub. If it stays online only, I can imagine it's fair for them to charge an extra amount, though I have no idea how that might be structured. Having said that, I would be miffed if it were online only. There's zero reason for them to do that, except to be greedy (and this is Adobe, so I do actually expect that).
    2 points
  4. I'd been thinking about making a topic on this for weeks, asking users of Firefly to share their thoughts on copyright issues and their experience with it, until today, when I stumbled upon this article: https://www.creativebloq.com/news/adobe-firefly-ai-legal-fees?utm_source=facebook.com&utm_content=computer-arts&utm_campaign=socialflow&utm_medium=social&fbclid=IwAR3f7vRA30r0U2WfCmjP3-DFgL5o6Imeey94o0roWYK_P01jjf0T_uGmeEg

     It seems Adobe is the first to go commercial with its AI text-to-image generator without legal concerns. Adobe is feeling confident about the training data, as it was based on Creative Commons licences and its own libraries, thus dodging any copyright infringement. But what about selling a computer-generated image that had no significant human intervention? Isn't that still under legal dispute? And how would a professional price such work?

     Has anyone here used Firefly? What do you think of it? Were there any limits on generated content per day? Does it work offline? How much would you pay to use an AI service?
    1 point
  5. Taking some inspiration from the C4D forum, I thought this'd be a nice, fun and simple exercise. There's mask control with a ramp, and you can drive the colouring and distortion either by the age of the particles or by the mask (see the VEX sketch after this entry). I've included the scene file here: Emit Circles and Distort Them as They Move Outwards.hiplc. This is the C4D thread:
    1 point
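     A minimal VEX point-wrangle sketch of the "age or mask drives colour and distortion" idea from the entry above. This is not the setup inside the attached .hiplc; the "mask" point attribute, the parameter names and the displacement along the normal are assumptions made purely for illustration.

         // Hypothetical point wrangle (run over points), not the attached .hiplc setup.
         // Pick a driver value (normalised particle age or an assumed float "mask"
         // attribute), remap it with a ramp, then use it for colour and displacement.
         int   use_age = chi("use_age");                        // toggle: 1 = age, 0 = mask
         float driver;
         if (use_age)
             driver = clamp(@age / max(@life, 1e-6), 0.0, 1.0); // normalised particle age
         else
             driver = clamp(f@mask, 0.0, 1.0);                  // assumed "mask" point attribute
         float shaped = chramp("shape", driver);                // ramp parameter remaps the driver
         @Cd = set(shaped, shaped, shaped);                     // greyscale colour from the shaped value
         @P += @N * shaped * chf("amplitude");                  // displace along the normal (assumes N exists)

     Swapping the greyscale for a colour ramp and the plain normal push for the distortion actually used in the file would get closer to the real thing, but those specifics are guesses.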
  6. Ooh, I did try the trick with the scale but completely forgot about the falloff. This might work, thank you!
    1 point
  7. First, you have too many lights. Look up "three-point lighting" and / or product photography studio setups for some foundational knowledge. Second, your lights appear way too big, so no matter what intensity you set, the product is getting flooded with light. Once you get a three-point setup, positioning them will also be key to getting the overall look. Study up! 🙂 Flat metallic printing will still look mostly flat when the main light source is straight on. Consider giving the box a very slight rotation, so the front is a bit more towards the cups.
    1 point
  8. The main thing to keep in mind with reflective surfaces like foil is not how you light them but what they reflect. I recommend you study some of the professional images agencies use to market smartphones, watches and the like, and look closely at the screen reflections on the phones and the reflections on the watch glass.
    1 point